00:00:00.001 Started by upstream project "autotest-per-patch" build number 122880
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.098 The recommended git tool is: git
00:00:00.098 using credential 00000000-0000-0000-0000-000000000002
00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.167 Fetching changes from the remote Git repository
00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.219 Using shallow fetch with depth 1
00:00:00.219 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.219 > git --version # timeout=10
00:00:00.252 > git --version # 'git version 2.39.2'
00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.571 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.583 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.595 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD)
00:00:05.595 > git config core.sparsecheckout # timeout=10
00:00:05.605 > git read-tree -mu HEAD # timeout=10
00:00:05.623 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5
00:00:05.642 Commit message: "inventory/dev: add missing long names"
00:00:05.642 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10
00:00:05.760 [Pipeline] Start of Pipeline
00:00:05.776 [Pipeline] library
00:00:05.778 Loading library shm_lib@master
00:00:05.778 Library shm_lib@master is cached. Copying from home.
00:00:05.795 [Pipeline] node
00:00:05.811 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.813 [Pipeline] {
00:00:05.826 [Pipeline] catchError
00:00:05.827 [Pipeline] {
00:00:05.843 [Pipeline] wrap
00:00:05.853 [Pipeline] {
00:00:05.863 [Pipeline] stage
00:00:05.865 [Pipeline] { (Prologue)
00:00:06.089 [Pipeline] sh
00:00:06.372 + logger -p user.info -t JENKINS-CI
00:00:06.389 [Pipeline] echo
00:00:06.390 Node: WFP8
00:00:06.398 [Pipeline] sh
00:00:06.697 [Pipeline] setCustomBuildProperty
00:00:06.708 [Pipeline] echo
00:00:06.709 Cleanup processes
00:00:06.714 [Pipeline] sh
00:00:06.998 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.998 1960638 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.009 [Pipeline] sh
00:00:07.291 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.291 ++ grep -v 'sudo pgrep'
00:00:07.291 ++ awk '{print $1}'
00:00:07.291 + sudo kill -9
00:00:07.291 + true
00:00:07.305 [Pipeline] cleanWs
00:00:07.314 [WS-CLEANUP] Deleting project workspace...
00:00:07.314 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.321 [WS-CLEANUP] done
00:00:07.324 [Pipeline] setCustomBuildProperty
00:00:07.337 [Pipeline] sh
00:00:07.619 + sudo git config --global --replace-all safe.directory '*'
00:00:07.700 [Pipeline] nodesByLabel
00:00:07.701 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.714 [Pipeline] httpRequest
00:00:07.718 HttpMethod: GET
00:00:07.719 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.724 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.739 Response Code: HTTP/1.1 200 OK
00:00:07.739 Success: Status code 200 is in the accepted range: 200,404
00:00:07.740 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:20.023 [Pipeline] sh
00:00:20.307 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:20.326 [Pipeline] httpRequest
00:00:20.330 HttpMethod: GET
00:00:20.331 URL: http://10.211.164.101/packages/spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz
00:00:20.331 Sending request to url: http://10.211.164.101/packages/spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz
00:00:20.335 Response Code: HTTP/1.1 200 OK
00:00:20.335 Success: Status code 200 is in the accepted range: 200,404
00:00:20.336 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz
00:00:26.756 [Pipeline] sh
00:00:27.040 + tar --no-same-owner -xf spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz
00:00:29.591 [Pipeline] sh
00:00:29.874 + git -C spdk log --oneline -n5
00:00:29.874 01f10b8a3 raid: fix race between starting rebuild and creating io channel
00:00:29.874 4506c0c36 test/common: Enable inherit_errexit
00:00:29.874 b24df7cfa test: Drop superfluous calls to print_backtrace()
00:00:29.874 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback
00:00:29.874 1dc065205 test/scheduler: Calculate median of the cpu load samples
00:00:29.887 [Pipeline] }
00:00:29.904 [Pipeline] // stage
00:00:29.913 [Pipeline] stage
00:00:29.915 [Pipeline] { (Prepare)
00:00:29.935 [Pipeline] writeFile
00:00:29.952 [Pipeline] sh
00:00:30.236 + logger -p user.info -t JENKINS-CI
00:00:30.250 [Pipeline] sh
00:00:30.534 + logger -p user.info -t JENKINS-CI
00:00:30.548 [Pipeline] sh
00:00:30.831 + cat autorun-spdk.conf
00:00:30.831 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.831 SPDK_TEST_NVMF=1
00:00:30.831 SPDK_TEST_NVME_CLI=1
00:00:30.831 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:30.831 SPDK_TEST_NVMF_NICS=e810
00:00:30.831 SPDK_TEST_VFIOUSER=1
00:00:30.831 SPDK_RUN_UBSAN=1
00:00:30.831 NET_TYPE=phy
00:00:30.839 RUN_NIGHTLY=0
00:00:30.844 [Pipeline] readFile
00:00:30.868 [Pipeline] withEnv
00:00:30.870 [Pipeline] {
00:00:30.885 [Pipeline] sh
00:00:31.170 + set -ex
00:00:31.170 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:31.170 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:31.170 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.170 ++ SPDK_TEST_NVMF=1
00:00:31.170 ++ SPDK_TEST_NVME_CLI=1
00:00:31.170 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:31.170 ++ SPDK_TEST_NVMF_NICS=e810
00:00:31.170 ++ SPDK_TEST_VFIOUSER=1
00:00:31.170 ++ SPDK_RUN_UBSAN=1
00:00:31.170 ++ NET_TYPE=phy
00:00:31.170 ++ RUN_NIGHTLY=0
00:00:31.170 + case $SPDK_TEST_NVMF_NICS in
00:00:31.170 + DRIVERS=ice
00:00:31.170 + [[ tcp == \r\d\m\a ]]
00:00:31.170 + [[ -n ice ]]
00:00:31.170 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:31.170 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:31.170 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:31.170 rmmod: ERROR: Module irdma is not currently loaded
00:00:31.170 rmmod: ERROR: Module i40iw is not currently loaded
00:00:31.170 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:31.170 + true
00:00:31.170 + for D in $DRIVERS
00:00:31.170 + sudo modprobe ice
00:00:31.170 + exit 0
00:00:31.180 [Pipeline] }
00:00:31.199 [Pipeline] // withEnv
00:00:31.204 [Pipeline] }
00:00:31.221 [Pipeline] // stage
00:00:31.232 [Pipeline] catchError
00:00:31.234 [Pipeline] {
00:00:31.250 [Pipeline] timeout
00:00:31.251 Timeout set to expire in 40 min
00:00:31.253 [Pipeline] {
00:00:31.269 [Pipeline] stage
00:00:31.271 [Pipeline] { (Tests)
00:00:31.287 [Pipeline] sh
00:00:31.573 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.573 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.573 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.573 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:31.573 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:31.573 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:31.573 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:31.573 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:31.573 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:31.573 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:31.573 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.573 + source /etc/os-release
00:00:31.573 ++ NAME='Fedora Linux'
00:00:31.573 ++ VERSION='38 (Cloud Edition)'
00:00:31.573 ++ ID=fedora
00:00:31.573 ++ VERSION_ID=38
00:00:31.573 ++ VERSION_CODENAME=
00:00:31.573 ++ PLATFORM_ID=platform:f38
00:00:31.573 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:31.573 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:31.573 ++ LOGO=fedora-logo-icon
00:00:31.573 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:31.573 ++ HOME_URL=https://fedoraproject.org/
00:00:31.573 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:31.573 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:31.573 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:31.573 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:31.573 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:31.573 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:31.573 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:31.573 ++ SUPPORT_END=2024-05-14
00:00:31.573 ++ VARIANT='Cloud Edition'
00:00:31.573 ++ VARIANT_ID=cloud
00:00:31.573 + uname -a
00:00:31.573 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:31.573 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:34.150 Hugepages
00:00:34.150 node hugesize free / total
00:00:34.150 node0 1048576kB 0 / 0
00:00:34.150 node0 2048kB 0 / 0
00:00:34.150 node1 1048576kB 0 / 0
00:00:34.150 node1 2048kB 0 / 0
00:00:34.150
00:00:34.150 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:34.150 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:34.150 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:34.150 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:34.150 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:34.150 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:34.150 + rm -f /tmp/spdk-ld-path
00:00:34.150 + source autorun-spdk.conf
00:00:34.150 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.150 ++ SPDK_TEST_NVMF=1
00:00:34.150 ++ SPDK_TEST_NVME_CLI=1
00:00:34.150 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.150 ++ SPDK_TEST_NVMF_NICS=e810
00:00:34.150 ++ SPDK_TEST_VFIOUSER=1
00:00:34.150 ++ SPDK_RUN_UBSAN=1
00:00:34.150 ++ NET_TYPE=phy
00:00:34.150 ++ RUN_NIGHTLY=0
00:00:34.150 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:34.150 + [[ -n '' ]]
00:00:34.150 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.150 + for M in /var/spdk/build-*-manifest.txt
00:00:34.150 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:34.150 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:34.150 + for M in /var/spdk/build-*-manifest.txt
00:00:34.150 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:34.150 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:34.150 ++ uname
00:00:34.150 + [[ Linux == \L\i\n\u\x ]]
00:00:34.150 + sudo dmesg -T
00:00:34.150 + sudo dmesg --clear
00:00:34.150 + dmesg_pid=1961552
00:00:34.150 + [[ Fedora Linux == FreeBSD ]]
00:00:34.150 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.150 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.150 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:34.150 + [[ -x /usr/src/fio-static/fio ]]
00:00:34.150 + export FIO_BIN=/usr/src/fio-static/fio
00:00:34.150 + FIO_BIN=/usr/src/fio-static/fio
00:00:34.150 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:34.150 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:34.150 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:34.150 + sudo dmesg -Tw
00:00:34.150 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.150 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.150 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:34.150 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.150 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.150 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:34.150 Test configuration:
00:00:34.150 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.150 SPDK_TEST_NVMF=1
00:00:34.150 SPDK_TEST_NVME_CLI=1
00:00:34.150 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.150 SPDK_TEST_NVMF_NICS=e810
00:00:34.150 SPDK_TEST_VFIOUSER=1
00:00:34.150 SPDK_RUN_UBSAN=1
00:00:34.150 NET_TYPE=phy
00:00:34.150 RUN_NIGHTLY=0
10:52:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:34.150 10:52:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:34.150 10:52:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:34.150 10:52:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:34.150 10:52:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.150 10:52:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.150 10:52:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.150 10:52:31 -- paths/export.sh@5 -- $ export PATH
00:00:34.150 10:52:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.150 10:52:31 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:34.150 10:52:31 -- common/autobuild_common.sh@437 -- $ date +%s
00:00:34.150 10:52:31 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715763151.XXXXXX
00:00:34.150 10:52:31 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715763151.mkisRa
00:00:34.150 10:52:31 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:00:34.150 10:52:31 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:00:34.150 10:52:31 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:34.151 10:52:31 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:34.151 10:52:31 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:34.151 10:52:31 -- common/autobuild_common.sh@453 -- $ get_config_params
00:00:34.151 10:52:31 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:34.151 10:52:31 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.151 10:52:31 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:34.151 10:52:31 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:00:34.151 10:52:31 -- pm/common@17 -- $ local monitor
00:00:34.151 10:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.151 10:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.151 10:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.151 10:52:31 -- pm/common@21 -- $ date +%s
00:00:34.151 10:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.151 10:52:31 -- pm/common@21 -- $ date +%s
00:00:34.151 10:52:31 -- pm/common@25 -- $ sleep 1
00:00:34.151 10:52:31 -- pm/common@21 -- $ date +%s
00:00:34.151 10:52:31 -- pm/common@21 -- $ date +%s
00:00:34.151 10:52:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763151
00:00:34.151 10:52:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763151
00:00:34.151 10:52:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763151
00:00:34.151 10:52:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763151
00:00:34.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763151_collect-cpu-temp.pm.log
00:00:34.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763151_collect-vmstat.pm.log
00:00:34.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763151_collect-cpu-load.pm.log
00:00:34.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763151_collect-bmc-pm.bmc.pm.log
00:00:35.088 10:52:32 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:00:35.088 10:52:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:35.088 10:52:32 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:35.088 10:52:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:35.088 10:52:32 -- spdk/autobuild.sh@16 -- $ date -u
00:00:35.088 Wed May 15 08:52:32 AM UTC 2024
00:00:35.088 10:52:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:35.088 v24.05-pre-659-g01f10b8a3
00:00:35.088 10:52:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:35.088 10:52:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:35.088 10:52:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:35.088 10:52:32 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']'
00:00:35.088 10:52:32 -- common/autotest_common.sh@1104 -- $ xtrace_disable
00:00:35.088 10:52:32 -- common/autotest_common.sh@10 -- $ set +x
00:00:35.088 ************************************
00:00:35.088 START TEST ubsan
00:00:35.088 ************************************
00:00:35.088 10:52:32 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan'
00:00:35.088 using ubsan
00:00:35.088
00:00:35.088 real 0m0.000s
00:00:35.088 user 0m0.000s
00:00:35.088 sys 0m0.000s
00:00:35.088 10:52:32 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable
00:00:35.088 10:52:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:35.088 ************************************
00:00:35.088 END TEST ubsan
00:00:35.088 ************************************
00:00:35.348 10:52:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:35.348 10:52:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:35.348 10:52:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:35.348 10:52:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:35.348 10:52:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:35.348 10:52:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:35.348 10:52:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:35.348 10:52:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:35.348 10:52:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:35.348 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:35.348 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:35.607 Using 'verbs' RDMA provider
00:00:48.747 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:58.723 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:58.723 Creating mk/config.mk...done.
00:00:58.723 Creating mk/cc.flags.mk...done.
00:00:58.723 Type 'make' to build.
00:00:58.723 10:52:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:00:58.723 10:52:55 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']'
00:00:58.723 10:52:55 -- common/autotest_common.sh@1104 -- $ xtrace_disable
00:00:58.723 10:52:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:58.723 ************************************
00:00:58.723 START TEST make
00:00:58.723 ************************************
00:00:58.723 10:52:55 make -- common/autotest_common.sh@1122 -- $ make -j96
00:00:58.981 make[1]: Nothing to be done for 'all'.
00:01:00.367 The Meson build system
00:01:00.367 Version: 1.3.1
00:01:00.367 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:00.367 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:00.367 Build type: native build
00:01:00.367 Project name: libvfio-user
00:01:00.367 Project version: 0.0.1
00:01:00.367 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:00.367 C linker for the host machine: cc ld.bfd 2.39-16
00:01:00.367 Host machine cpu family: x86_64
00:01:00.367 Host machine cpu: x86_64
00:01:00.367 Run-time dependency threads found: YES
00:01:00.367 Library dl found: YES
00:01:00.367 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:00.367 Run-time dependency json-c found: YES 0.17
00:01:00.367 Run-time dependency cmocka found: YES 1.1.7
00:01:00.367 Program pytest-3 found: NO
00:01:00.367 Program flake8 found: NO
00:01:00.367 Program misspell-fixer found: NO
00:01:00.367 Program restructuredtext-lint found: NO
00:01:00.367 Program valgrind found: YES (/usr/bin/valgrind)
00:01:00.367 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:00.367 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:00.367 Compiler for C supports arguments -Wwrite-strings: YES
00:01:00.367 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:00.367 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:00.367 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:00.367 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:00.367 Build targets in project: 8
00:01:00.367 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:00.367 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:00.367
00:01:00.367 libvfio-user 0.0.1
00:01:00.367
00:01:00.367 User defined options
00:01:00.367 buildtype : debug
00:01:00.367 default_library: shared
00:01:00.367 libdir : /usr/local/lib
00:01:00.367
00:01:00.367 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:00.625 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:00.625 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:00.625 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:00.625 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:00.625 [4/37] Compiling C object samples/null.p/null.c.o
00:01:00.625 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:00.625 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:00.625 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:00.625 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:00.625 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:00.625 [10/37] Compiling C object samples/server.p/server.c.o
00:01:00.625 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:00.625 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:00.625 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:00.625 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:00.625 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:00.625 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:00.625 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:00.625 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:00.625 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:00.625 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:00.625 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:00.625 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:00.625 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:00.625 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:00.625 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:00.883 [26/37] Compiling C object samples/client.p/client.c.o
00:01:00.883 [27/37] Linking target samples/client
00:01:00.883 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:00.883 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:00.883 [30/37] Linking target test/unit_tests
00:01:00.883 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:01.141 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:01.142 [33/37] Linking target samples/lspci
00:01:01.142 [34/37] Linking target samples/null
00:01:01.142 [35/37] Linking target samples/gpio-pci-idio-16
00:01:01.142 [36/37] Linking target samples/server
00:01:01.142 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:01.142 INFO: autodetecting backend as ninja
00:01:01.142 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:01.142 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:01.400 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:01.400 ninja: no work to do.
00:01:06.668 The Meson build system
00:01:06.668 Version: 1.3.1
00:01:06.668 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:06.668 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:06.668 Build type: native build
00:01:06.668 Program cat found: YES (/usr/bin/cat)
00:01:06.668 Project name: DPDK
00:01:06.668 Project version: 23.11.0
00:01:06.668 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.668 C linker for the host machine: cc ld.bfd 2.39-16
00:01:06.668 Host machine cpu family: x86_64
00:01:06.668 Host machine cpu: x86_64
00:01:06.668 Message: ## Building in Developer Mode ##
00:01:06.668 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:06.668 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:06.668 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:06.668 Program python3 found: YES (/usr/bin/python3)
00:01:06.668 Program cat found: YES (/usr/bin/cat)
00:01:06.668 Compiler for C supports arguments -march=native: YES
00:01:06.668 Checking for size of "void *" : 8
00:01:06.668 Checking for size of "void *" : 8 (cached)
00:01:06.668 Library m found: YES
00:01:06.668 Library numa found: YES
00:01:06.668 Has header "numaif.h" : YES
00:01:06.668 Library fdt found: NO
00:01:06.668 Library execinfo found: NO
00:01:06.668 Has header "execinfo.h" : YES
00:01:06.668 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:06.668 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:06.668 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:06.668 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:06.668 Run-time dependency openssl found: YES 3.0.9
00:01:06.668 Run-time dependency libpcap found: YES 1.10.4
00:01:06.668 Has header "pcap.h" with dependency libpcap: YES
00:01:06.668 Compiler for C supports arguments -Wcast-qual: YES
00:01:06.668 Compiler for C supports arguments -Wdeprecated: YES
00:01:06.668 Compiler for C supports arguments -Wformat: YES
00:01:06.669 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:06.669 Compiler for C supports arguments -Wformat-security: NO
00:01:06.669 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.669 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:06.669 Compiler for C supports arguments -Wnested-externs: YES
00:01:06.669 Compiler for C supports arguments -Wold-style-definition: YES
00:01:06.669 Compiler for C supports arguments -Wpointer-arith: YES
00:01:06.669 Compiler for C supports arguments -Wsign-compare: YES
00:01:06.669 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:06.669 Compiler for C supports arguments -Wundef: YES
00:01:06.669 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.669 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:06.669 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:06.669 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.669 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:06.669 Program objdump found: YES (/usr/bin/objdump)
00:01:06.669 Compiler for C supports arguments -mavx512f: YES
00:01:06.669 Checking if "AVX512 checking" compiles: YES
00:01:06.669 Fetching value of define "__SSE4_2__" : 1
00:01:06.669 Fetching value of define "__AES__" : 1
00:01:06.669 Fetching value of define "__AVX__" : 1
00:01:06.669 Fetching value of define "__AVX2__" : 1
00:01:06.669 Fetching value of define "__AVX512BW__" : 1
00:01:06.669 Fetching value of define "__AVX512CD__" : 1
00:01:06.669 Fetching value of define "__AVX512DQ__" : 1
00:01:06.669 Fetching value of define "__AVX512F__" : 1
00:01:06.669 Fetching value of define "__AVX512VL__" : 1
00:01:06.669 Fetching value of define "__PCLMUL__" : 1
00:01:06.669 Fetching value of define "__RDRND__" : 1
00:01:06.669 Fetching value of define "__RDSEED__" : 1
00:01:06.669 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:06.669 Fetching value of define "__znver1__" : (undefined)
00:01:06.669 Fetching value of define "__znver2__" : (undefined)
00:01:06.669 Fetching value of define "__znver3__" : (undefined)
00:01:06.669 Fetching value of define "__znver4__" : (undefined)
00:01:06.669 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:06.669 Message: lib/log: Defining dependency "log"
00:01:06.669 Message: lib/kvargs: Defining dependency "kvargs"
00:01:06.669 Message: lib/telemetry: Defining dependency "telemetry"
00:01:06.669 Checking for function "getentropy" : NO
00:01:06.669 Message: lib/eal: Defining dependency "eal"
00:01:06.669 Message: lib/ring: Defining dependency "ring"
00:01:06.669 Message: lib/rcu: Defining dependency "rcu"
00:01:06.669 Message: lib/mempool: Defining dependency "mempool"
00:01:06.669 Message: lib/mbuf: Defining dependency "mbuf"
00:01:06.669 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:06.669 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:06.669 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:06.669 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:06.669 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:06.669 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:06.669 Compiler for C supports arguments -mpclmul: YES
00:01:06.669 Compiler for C supports arguments -maes: YES
00:01:06.669 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:06.669 Compiler for C supports arguments -mavx512bw: YES
00:01:06.669 Compiler for C supports arguments -mavx512dq: YES
00:01:06.669 Compiler for C supports arguments -mavx512vl: YES
00:01:06.669 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:06.669 Compiler for C supports arguments -mavx2: YES
00:01:06.669 Compiler for C supports arguments -mavx: YES
00:01:06.669 Message: lib/net: Defining dependency "net"
00:01:06.669 Message: lib/meter: Defining dependency "meter"
00:01:06.669 Message: lib/ethdev: Defining dependency "ethdev"
00:01:06.669 Message: lib/pci: Defining dependency "pci"
00:01:06.669 Message: lib/cmdline: Defining dependency "cmdline"
00:01:06.669 Message: lib/hash: Defining dependency "hash"
00:01:06.669 Message: lib/timer: Defining dependency "timer"
00:01:06.669 Message: lib/compressdev: Defining dependency "compressdev"
00:01:06.669 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:06.669 Message: lib/dmadev: Defining dependency "dmadev"
00:01:06.669 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:06.669 Message: lib/power: Defining dependency "power"
00:01:06.669 Message: lib/reorder: Defining dependency "reorder"
00:01:06.669 Message: lib/security: Defining dependency "security"
00:01:06.669 Has header "linux/userfaultfd.h" : YES
00:01:06.669 Has header "linux/vduse.h" : YES
00:01:06.669 Message: lib/vhost: Defining dependency "vhost"
00:01:06.669 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:06.669 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:06.669 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:06.669 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:06.669 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:06.669 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:06.669 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:06.669 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:06.669 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:06.669 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:06.669 Program doxygen found: YES (/usr/bin/doxygen)
00:01:06.669 Configuring doxy-api-html.conf using configuration
00:01:06.669 Configuring doxy-api-man.conf using configuration
00:01:06.669 Program mandb found: YES (/usr/bin/mandb)
00:01:06.669 Program sphinx-build found: NO
00:01:06.669 Configuring rte_build_config.h using configuration
00:01:06.669 Message:
00:01:06.669 =================
00:01:06.669 Applications Enabled
00:01:06.669 =================
00:01:06.669
00:01:06.669 apps:
00:01:06.669
00:01:06.669
00:01:06.669 Message:
00:01:06.669 =================
00:01:06.669 Libraries Enabled
00:01:06.669 =================
00:01:06.669
00:01:06.669 libs:
00:01:06.669 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:06.669 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:06.669 cryptodev, dmadev, power, reorder, security, vhost,
00:01:06.669
00:01:06.669 Message:
00:01:06.669 ===============
00:01:06.669 Drivers Enabled
00:01:06.669 ===============
00:01:06.669
00:01:06.669 common:
00:01:06.669
00:01:06.669 bus:
00:01:06.669 pci, vdev,
00:01:06.669 mempool:
00:01:06.669 ring,
00:01:06.669 dma:
00:01:06.669
00:01:06.669 net:
00:01:06.669
00:01:06.669 crypto:
00:01:06.669
00:01:06.669 compress:
00:01:06.669
00:01:06.669 vdpa:
00:01:06.669
00:01:06.669
00:01:06.669 Message:
00:01:06.669 =================
00:01:06.669 Content Skipped
00:01:06.669 =================
00:01:06.669
00:01:06.669 apps:
00:01:06.669 dumpcap: explicitly disabled via build config
00:01:06.669 graph: explicitly disabled via build config
00:01:06.669 pdump: explicitly disabled via build config
00:01:06.669 proc-info: explicitly disabled via build config
00:01:06.669 test-acl: explicitly disabled via build config
00:01:06.669 test-bbdev: explicitly disabled via build config
00:01:06.669 test-cmdline: explicitly disabled via build config
00:01:06.669 test-compress-perf: explicitly disabled via build config
00:01:06.669 test-crypto-perf: explicitly disabled via build config
00:01:06.669 test-dma-perf: explicitly disabled via build config
00:01:06.669 test-eventdev: explicitly disabled via build config
00:01:06.669 test-fib: explicitly disabled via build config
00:01:06.669 test-flow-perf: explicitly disabled via build config
00:01:06.669 test-gpudev: explicitly disabled via build config
00:01:06.669 test-mldev: explicitly disabled via build config
00:01:06.669 test-pipeline: explicitly disabled via build config
00:01:06.669 test-pmd: explicitly disabled via build config
00:01:06.669 test-regex: explicitly disabled via build config
00:01:06.669 test-sad: explicitly disabled via build config
00:01:06.669 test-security-perf: explicitly disabled via build config
00:01:06.669
00:01:06.669 libs:
00:01:06.669 metrics: explicitly disabled via build config
00:01:06.669 acl: explicitly disabled via build config
00:01:06.669 bbdev: explicitly disabled via build config
00:01:06.669 bitratestats: explicitly disabled via build config
00:01:06.669 bpf: explicitly disabled via build config
00:01:06.669 cfgfile: explicitly disabled via build config
00:01:06.669 distributor: explicitly disabled via build config
00:01:06.669 efd: explicitly disabled via build config
00:01:06.669 eventdev: explicitly disabled via build config
00:01:06.669 dispatcher: explicitly disabled via build config
00:01:06.669 gpudev: explicitly disabled via build config
00:01:06.669 gro: explicitly disabled via build config
00:01:06.669 gso: explicitly disabled via build config
00:01:06.669 ip_frag: explicitly disabled via build config
00:01:06.669 jobstats: explicitly disabled via build config
00:01:06.669 latencystats: explicitly disabled via build config
00:01:06.669 lpm: explicitly disabled via build config
00:01:06.669 member: explicitly disabled via build config
00:01:06.669 pcapng: explicitly disabled via build config
00:01:06.669 rawdev: explicitly disabled via build config
00:01:06.669 regexdev: explicitly disabled via build config
00:01:06.669 mldev: explicitly disabled via build config
00:01:06.669 rib: explicitly disabled via build config
00:01:06.669 sched: explicitly disabled via build config
00:01:06.669 stack: explicitly disabled via build config
00:01:06.669 ipsec: explicitly disabled via build config
00:01:06.669 pdcp: explicitly disabled via build config
00:01:06.669 fib: explicitly disabled via build config
00:01:06.669 port: explicitly disabled via build config
00:01:06.669 pdump: explicitly disabled via build config
00:01:06.669 table: explicitly disabled via build config
00:01:06.669 pipeline: explicitly disabled via build config
00:01:06.669 graph: explicitly disabled via build config
00:01:06.669 node: explicitly disabled via build config
00:01:06.669
00:01:06.669 drivers:
00:01:06.669 common/cpt: not in enabled drivers build config
00:01:06.669 common/dpaax: not in enabled drivers build config
00:01:06.669 common/iavf: not in enabled drivers build config
00:01:06.669 common/idpf: not in enabled drivers build config
00:01:06.669 common/mvep: not in enabled drivers build config
00:01:06.669 common/octeontx: not in enabled drivers build config
00:01:06.669 bus/auxiliary: not in enabled drivers build config
00:01:06.669 bus/cdx: not in enabled drivers build config
00:01:06.669 bus/dpaa: not in enabled drivers build config
00:01:06.670 bus/fslmc: not in enabled drivers build config
00:01:06.670 bus/ifpga: not in enabled drivers build config
00:01:06.670 bus/platform: not in enabled drivers build config
00:01:06.670 bus/vmbus: not in enabled drivers build config
00:01:06.670 common/cnxk: not in enabled drivers build config
00:01:06.670 common/mlx5: not in enabled drivers build config
00:01:06.670 common/nfp: not in enabled drivers build config
00:01:06.670 common/qat: not in enabled drivers build config
00:01:06.670 common/sfc_efx: not in enabled drivers build config
00:01:06.670 mempool/bucket: not in enabled drivers build config
00:01:06.670 mempool/cnxk: not in enabled drivers build config
00:01:06.670 mempool/dpaa: not in enabled drivers build config
00:01:06.670 mempool/dpaa2: not in enabled drivers build config
00:01:06.670 mempool/octeontx: not in enabled drivers build config
00:01:06.670 mempool/stack: not in enabled drivers build config
00:01:06.670 dma/cnxk: not in enabled drivers build config
00:01:06.670 dma/dpaa: not in enabled drivers build config
00:01:06.670 dma/dpaa2: not in enabled drivers build config
00:01:06.670 dma/hisilicon: not in enabled drivers build config
00:01:06.670 dma/idxd: not in enabled drivers build config
00:01:06.670 dma/ioat: not in enabled drivers build config
00:01:06.670 dma/skeleton: not in enabled drivers build config
00:01:06.670 net/af_packet: not in enabled drivers build config
00:01:06.670 net/af_xdp: not in enabled drivers build config
00:01:06.670 net/ark: not in enabled drivers build config
00:01:06.670 net/atlantic: not in enabled drivers build config
00:01:06.670 net/avp: not in enabled drivers build config
00:01:06.670 net/axgbe: not in enabled drivers build config
00:01:06.670 net/bnx2x: not in enabled drivers build config
00:01:06.670 net/bnxt: not in enabled drivers build config
00:01:06.670 net/bonding: not in enabled drivers build config
00:01:06.670 net/cnxk: not in enabled drivers build config
00:01:06.670 net/cpfl: not in enabled drivers build config
00:01:06.670 net/cxgbe: not in enabled drivers build config
00:01:06.670 net/dpaa: not in enabled drivers build config
00:01:06.670 net/dpaa2: not in enabled drivers build config
00:01:06.670 net/e1000: not in enabled drivers build config
00:01:06.670 net/ena: not in enabled drivers build config
00:01:06.670 net/enetc: not in enabled drivers build config
00:01:06.670 net/enetfec: not in enabled drivers build config
00:01:06.670 net/enic: not in enabled drivers build config
00:01:06.670 net/failsafe: not in enabled drivers build config
00:01:06.670 net/fm10k: not in enabled drivers build config
00:01:06.670 net/gve: not in enabled drivers build config
00:01:06.670 net/hinic: not in enabled drivers build config
00:01:06.670 net/hns3: not in enabled drivers build config
00:01:06.670 net/i40e: not in enabled drivers build config
00:01:06.670 net/iavf: not in enabled drivers build config
00:01:06.670 net/ice: not in enabled drivers build config
00:01:06.670 net/idpf: not in enabled drivers build config
00:01:06.670 net/igc: not in enabled drivers build config
00:01:06.670 net/ionic: not in enabled drivers build config
00:01:06.670 net/ipn3ke: not in enabled drivers build config
00:01:06.670 net/ixgbe: not in enabled drivers build config
00:01:06.670 net/mana: not in enabled drivers build config
00:01:06.670 net/memif: not in enabled drivers build config
00:01:06.670 net/mlx4: not in enabled drivers build config
00:01:06.670 net/mlx5: not in enabled drivers build config
00:01:06.670 net/mvneta: not in enabled drivers build config
00:01:06.670 net/mvpp2: not in enabled drivers build config
00:01:06.670 net/netvsc: not in enabled drivers build config
00:01:06.670 net/nfb: not in enabled drivers build config
00:01:06.670 net/nfp: not in enabled drivers build config
00:01:06.670 net/ngbe: not in enabled drivers build config
00:01:06.670 net/null: not in enabled drivers build config
00:01:06.670 net/octeontx: not in enabled drivers build config
00:01:06.670 net/octeon_ep: not in enabled drivers build config
00:01:06.670 net/pcap: not in enabled drivers build config
00:01:06.670 net/pfe: not in enabled drivers build config
00:01:06.670 net/qede: not in enabled drivers build config
00:01:06.670 net/ring: not in enabled drivers build config
00:01:06.670 net/sfc: not in enabled drivers build config
00:01:06.670 net/softnic: not in enabled drivers build config
00:01:06.670 net/tap: not in enabled drivers build config
00:01:06.670 net/thunderx: not in enabled drivers build config
00:01:06.670 net/txgbe: not in enabled drivers build config
00:01:06.670 net/vdev_netvsc: not in enabled drivers build config
00:01:06.670 net/vhost: not in enabled drivers build config
00:01:06.670 net/virtio: not in enabled drivers build config
00:01:06.670 net/vmxnet3: not in enabled drivers build config
00:01:06.670 raw/*: missing internal dependency, "rawdev"
00:01:06.670 crypto/armv8: not in enabled drivers build config
00:01:06.670 crypto/bcmfs: not in enabled drivers build config
00:01:06.670 crypto/caam_jr: not in enabled drivers build config
00:01:06.670 crypto/ccp: not in enabled drivers build config
00:01:06.670 crypto/cnxk: not in enabled drivers build config
00:01:06.670 crypto/dpaa_sec: not in enabled drivers build config
00:01:06.670 crypto/dpaa2_sec: not in enabled drivers build config
00:01:06.670 crypto/ipsec_mb: not in enabled drivers build config
00:01:06.670 crypto/mlx5: not in enabled drivers build config
00:01:06.670 crypto/mvsam: not in enabled drivers build config
00:01:06.670 crypto/nitrox: not in enabled drivers build config
00:01:06.670 crypto/null: not in enabled drivers build config
00:01:06.670 crypto/octeontx: not in enabled drivers build config
00:01:06.670 crypto/openssl: not in enabled drivers build config
00:01:06.670 crypto/scheduler: not in enabled drivers build config
00:01:06.670 crypto/uadk: not in enabled drivers build config
00:01:06.670 crypto/virtio: not in enabled drivers build config
00:01:06.670 compress/isal: not in enabled drivers build config
00:01:06.670 compress/mlx5: not in enabled drivers build config
00:01:06.670 compress/octeontx: not in enabled drivers build config
00:01:06.670 compress/zlib: not in enabled drivers build config
00:01:06.670 regex/*: missing internal dependency, "regexdev"
00:01:06.670 ml/*: missing internal dependency, "mldev"
00:01:06.670 vdpa/ifc: not in enabled drivers build config
00:01:06.670 vdpa/mlx5: not in enabled drivers build config
00:01:06.670 vdpa/nfp: not in enabled drivers build config
00:01:06.670 vdpa/sfc: not in enabled drivers build config
00:01:06.670 event/*: missing internal dependency, "eventdev"
00:01:06.670 baseband/*: missing internal dependency, "bbdev"
00:01:06.670 gpu/*: missing internal dependency, "gpudev"
00:01:06.670
00:01:06.670
00:01:06.670 Build targets in project: 85
00:01:06.670
00:01:06.670 DPDK 23.11.0
00:01:06.670
00:01:06.670 User defined options
00:01:06.670 buildtype : debug
00:01:06.670 default_library : shared
00:01:06.670 libdir : lib
00:01:06.670 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:06.670 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:06.670 c_link_args :
00:01:06.670 cpu_instruction_set: native
00:01:06.670 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:06.670 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:06.670 enable_docs : false
00:01:06.670 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:06.670 enable_kmods : false
00:01:06.670 tests : false
00:01:06.670
00:01:06.670 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:06.938 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:07.202 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:07.202 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:07.202 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:07.202 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:07.202 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:07.202 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:07.202 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:07.202 [8/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:07.202 [9/265] Linking static target lib/librte_kvargs.a
00:01:07.202 [10/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:07.203 [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:07.203 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:07.203 [13/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:07.203 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:07.203 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:07.203 [16/265] Linking static target lib/librte_log.a
00:01:07.203 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:07.203 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:07.203 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:07.203 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:07.203 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:07.203 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:07.203 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:07.203 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:07.203 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:07.203 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:07.203 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:07.203 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:07.461 [29/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:07.461 [30/265] Linking static target lib/librte_pci.a
00:01:07.461 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:07.461 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:07.461 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:07.461 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:07.461 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:07.461 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:07.461 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:07.461 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:07.461 [39/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:07.756 [40/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:07.756 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:07.756 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:07.756 [43/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:07.756 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:07.756 [45/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:07.756 [46/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:07.756 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:07.756 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:07.756 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:07.756 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:07.756 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:07.756 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:07.756 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:07.756 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:07.756 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:07.756 [56/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:07.756 [57/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:07.756 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:07.756 [59/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:07.756 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:07.756 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:07.756 [62/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:07.756 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:07.756 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:07.756 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:07.756 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:07.756 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:07.756 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:07.756 [69/265] Linking static target lib/librte_ring.a
00:01:07.756 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:07.756 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:07.756 [72/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:07.756 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:07.756 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:07.756 [75/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:07.756 [76/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:07.756 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:07.756 [78/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:07.756 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:07.756 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:07.756 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:07.756 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:07.756 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:07.756 [84/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:07.756 [85/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:07.757 [86/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:07.757 [87/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:07.757 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:07.757 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:07.757 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:07.757 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:07.757 [92/265] Linking static target lib/librte_meter.a
00:01:07.757 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:07.757 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:07.757 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:07.757 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:07.757 [97/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:07.757 [98/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:07.757 [99/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:07.757 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:07.757 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:07.757 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:07.757 [103/265] Linking static target lib/librte_telemetry.a
00:01:07.757 [104/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:07.757 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:07.757 [106/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:07.757 [107/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:07.757 [108/265] Linking static target lib/librte_rcu.a
00:01:07.757 [109/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:07.757 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:07.757 [111/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:07.757 [112/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:07.757 [113/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:07.757 [114/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:07.757 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:07.757 [116/265] Linking static target lib/librte_cmdline.a
00:01:07.757 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:07.757 [118/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:07.757 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:07.757 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:07.757 [121/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:07.757 [122/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:07.757 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:07.757 [124/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:07.757 [125/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:07.757 [126/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:07.757 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:07.757 [128/265] Linking static target lib/librte_mempool.a
00:01:07.757 [129/265] Linking static target lib/librte_net.a
00:01:07.757 [130/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:08.016 [131/265] Linking static target lib/librte_eal.a
00:01:08.016 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:08.016 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:08.016 [134/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.016 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:08.016 [136/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:08.016 [137/265] Linking static target lib/librte_timer.a
00:01:08.016 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:08.016 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:08.016 [140/265] Linking target lib/librte_log.so.24.0
00:01:08.016 [141/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.016 [142/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.016 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:08.016 [144/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:08.016 [145/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:08.016 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:08.016 [147/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:08.016 [148/265] Linking static target lib/librte_mbuf.a
00:01:08.016 [149/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:08.016 [150/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:08.016 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:08.016 [152/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:08.016 [153/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.016 [154/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:08.016 [155/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:08.016 [156/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:08.016 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:08.016 [158/265] Linking static target lib/librte_dmadev.a
00:01:08.016 [159/265] Linking static target lib/librte_compressdev.a
00:01:08.016 [160/265] Linking target lib/librte_kvargs.so.24.0
00:01:08.016 [161/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:08.016 [162/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:08.016 [163/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.016 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:08.016 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:08.016 [166/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:08.016 [167/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:08.016 [168/265] Linking static target lib/librte_reorder.a
00:01:08.274 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:08.274 [170/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:08.274 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:08.274 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:08.274 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:08.274 [174/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:08.274 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:08.274 [176/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:08.274 [177/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:08.274 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:08.274 [179/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:08.274 [180/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.274 [181/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:08.274 [182/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:08.274 [183/265] Linking static target lib/librte_hash.a
00:01:08.274 [184/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:08.274 [185/265] Linking static target lib/librte_security.a
00:01:08.274 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:08.274 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:08.274 [188/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:08.274 [189/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:08.274 [190/265] Linking static target lib/librte_power.a
00:01:08.274 [191/265] Linking target lib/librte_telemetry.so.24.0
00:01:08.274 [192/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.274 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:08.274 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:08.274 [195/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:08.274 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:08.274 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:08.274 [198/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:08.274 [199/265] Linking static target drivers/librte_bus_vdev.a
00:01:08.274 [200/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:08.533 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:08.533 [202/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:08.533 [203/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.533 [204/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.533 [205/265] Linking static target drivers/librte_mempool_ring.a 00:01:08.533 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.533 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.533 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:08.533 [209/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.533 [210/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.533 [211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.533 [212/265] Linking static target lib/librte_cryptodev.a 00:01:08.533 [213/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.533 [214/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.790 [215/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.790 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.790 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.790 [218/265] Linking static target lib/librte_ethdev.a 00:01:08.790 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.790 [220/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.048 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.048 [222/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:09.048 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.048 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.018 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:10.018 [226/265] Linking static target lib/librte_vhost.a 00:01:10.276 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.648 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.905 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.162 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.162 [231/265] Linking target lib/librte_eal.so.24.0 00:01:17.421 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:17.421 [233/265] Linking target lib/librte_ring.so.24.0 00:01:17.421 [234/265] Linking target lib/librte_pci.so.24.0 00:01:17.421 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:17.421 [236/265] Linking target lib/librte_timer.so.24.0 00:01:17.421 [237/265] Linking target lib/librte_meter.so.24.0 00:01:17.421 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:17.421 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:17.421 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:17.421 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:17.421 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:17.421 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:17.421 [244/265] Linking target lib/librte_mempool.so.24.0 00:01:17.421 
[245/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:17.421 [246/265] Linking target lib/librte_rcu.so.24.0 00:01:17.678 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:17.678 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:17.678 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:17.678 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:17.935 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:17.935 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:17.935 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:17.935 [254/265] Linking target lib/librte_net.so.24.0 00:01:17.935 [255/265] Linking target lib/librte_reorder.so.24.0 00:01:17.935 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:17.935 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:17.935 [258/265] Linking target lib/librte_security.so.24.0 00:01:17.935 [259/265] Linking target lib/librte_hash.so.24.0 00:01:17.935 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:18.192 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:18.192 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:18.192 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:18.192 [264/265] Linking target lib/librte_power.so.24.0 00:01:18.192 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:18.192 INFO: autodetecting backend as ninja 00:01:18.192 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:19.123 CC lib/ut/ut.o 00:01:19.123 CC lib/log/log.o 00:01:19.123 CC lib/log/log_deprecated.o 00:01:19.123 CC lib/log/log_flags.o 00:01:19.123 CC lib/ut_mock/mock.o 00:01:19.379 
LIB libspdk_ut.a 00:01:19.379 LIB libspdk_log.a 00:01:19.379 LIB libspdk_ut_mock.a 00:01:19.379 SO libspdk_ut.so.2.0 00:01:19.379 SO libspdk_log.so.7.0 00:01:19.379 SO libspdk_ut_mock.so.6.0 00:01:19.379 SYMLINK libspdk_ut.so 00:01:19.379 SYMLINK libspdk_log.so 00:01:19.379 SYMLINK libspdk_ut_mock.so 00:01:19.636 CC lib/ioat/ioat.o 00:01:19.636 CXX lib/trace_parser/trace.o 00:01:19.636 CC lib/dma/dma.o 00:01:19.636 CC lib/util/base64.o 00:01:19.636 CC lib/util/bit_array.o 00:01:19.895 CC lib/util/cpuset.o 00:01:19.895 CC lib/util/crc16.o 00:01:19.895 CC lib/util/crc32.o 00:01:19.895 CC lib/util/crc32c.o 00:01:19.895 CC lib/util/crc32_ieee.o 00:01:19.895 CC lib/util/dif.o 00:01:19.895 CC lib/util/crc64.o 00:01:19.895 CC lib/util/file.o 00:01:19.895 CC lib/util/fd.o 00:01:19.895 CC lib/util/hexlify.o 00:01:19.895 CC lib/util/iov.o 00:01:19.895 CC lib/util/math.o 00:01:19.895 CC lib/util/pipe.o 00:01:19.895 CC lib/util/strerror_tls.o 00:01:19.895 CC lib/util/string.o 00:01:19.895 CC lib/util/uuid.o 00:01:19.895 CC lib/util/fd_group.o 00:01:19.895 CC lib/util/xor.o 00:01:19.895 CC lib/util/zipf.o 00:01:19.895 CC lib/vfio_user/host/vfio_user_pci.o 00:01:19.895 CC lib/vfio_user/host/vfio_user.o 00:01:19.895 LIB libspdk_dma.a 00:01:19.895 SO libspdk_dma.so.4.0 00:01:19.895 LIB libspdk_ioat.a 00:01:19.895 SYMLINK libspdk_dma.so 00:01:20.154 SO libspdk_ioat.so.7.0 00:01:20.154 SYMLINK libspdk_ioat.so 00:01:20.154 LIB libspdk_vfio_user.a 00:01:20.154 SO libspdk_vfio_user.so.5.0 00:01:20.154 LIB libspdk_util.a 00:01:20.154 SYMLINK libspdk_vfio_user.so 00:01:20.154 SO libspdk_util.so.9.0 00:01:20.413 SYMLINK libspdk_util.so 00:01:20.413 LIB libspdk_trace_parser.a 00:01:20.413 SO libspdk_trace_parser.so.5.0 00:01:20.670 SYMLINK libspdk_trace_parser.so 00:01:20.670 CC lib/idxd/idxd.o 00:01:20.670 CC lib/idxd/idxd_user.o 00:01:20.670 CC lib/rdma/common.o 00:01:20.670 CC lib/rdma/rdma_verbs.o 00:01:20.670 CC lib/vmd/vmd.o 00:01:20.670 CC lib/vmd/led.o 00:01:20.670 CC 
lib/conf/conf.o 00:01:20.670 CC lib/json/json_parse.o 00:01:20.670 CC lib/json/json_util.o 00:01:20.670 CC lib/json/json_write.o 00:01:20.670 CC lib/env_dpdk/memory.o 00:01:20.670 CC lib/env_dpdk/env.o 00:01:20.670 CC lib/env_dpdk/pci.o 00:01:20.670 CC lib/env_dpdk/pci_ioat.o 00:01:20.670 CC lib/env_dpdk/init.o 00:01:20.670 CC lib/env_dpdk/threads.o 00:01:20.670 CC lib/env_dpdk/pci_vmd.o 00:01:20.670 CC lib/env_dpdk/pci_virtio.o 00:01:20.670 CC lib/env_dpdk/pci_idxd.o 00:01:20.670 CC lib/env_dpdk/pci_event.o 00:01:20.670 CC lib/env_dpdk/sigbus_handler.o 00:01:20.670 CC lib/env_dpdk/pci_dpdk.o 00:01:20.670 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:20.670 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:20.929 LIB libspdk_conf.a 00:01:20.929 LIB libspdk_rdma.a 00:01:20.929 SO libspdk_conf.so.6.0 00:01:20.929 SO libspdk_rdma.so.6.0 00:01:20.929 LIB libspdk_json.a 00:01:20.929 SO libspdk_json.so.6.0 00:01:20.929 SYMLINK libspdk_conf.so 00:01:20.929 SYMLINK libspdk_rdma.so 00:01:20.929 SYMLINK libspdk_json.so 00:01:21.187 LIB libspdk_idxd.a 00:01:21.187 SO libspdk_idxd.so.12.0 00:01:21.187 LIB libspdk_vmd.a 00:01:21.187 SYMLINK libspdk_idxd.so 00:01:21.187 SO libspdk_vmd.so.6.0 00:01:21.187 SYMLINK libspdk_vmd.so 00:01:21.445 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:21.445 CC lib/jsonrpc/jsonrpc_server.o 00:01:21.445 CC lib/jsonrpc/jsonrpc_client.o 00:01:21.445 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:21.445 LIB libspdk_jsonrpc.a 00:01:21.703 SO libspdk_jsonrpc.so.6.0 00:01:21.703 SYMLINK libspdk_jsonrpc.so 00:01:21.703 LIB libspdk_env_dpdk.a 00:01:21.703 SO libspdk_env_dpdk.so.14.0 00:01:21.961 SYMLINK libspdk_env_dpdk.so 00:01:21.961 CC lib/rpc/rpc.o 00:01:22.218 LIB libspdk_rpc.a 00:01:22.218 SO libspdk_rpc.so.6.0 00:01:22.218 SYMLINK libspdk_rpc.so 00:01:22.474 CC lib/trace/trace.o 00:01:22.474 CC lib/trace/trace_flags.o 00:01:22.474 CC lib/trace/trace_rpc.o 00:01:22.474 CC lib/notify/notify.o 00:01:22.474 CC lib/notify/notify_rpc.o 00:01:22.474 CC lib/keyring/keyring.o 
00:01:22.474 CC lib/keyring/keyring_rpc.o 00:01:22.730 LIB libspdk_notify.a 00:01:22.730 SO libspdk_notify.so.6.0 00:01:22.730 LIB libspdk_trace.a 00:01:22.730 LIB libspdk_keyring.a 00:01:22.730 SO libspdk_trace.so.10.0 00:01:22.730 SYMLINK libspdk_notify.so 00:01:22.730 SO libspdk_keyring.so.1.0 00:01:22.730 SYMLINK libspdk_trace.so 00:01:22.730 SYMLINK libspdk_keyring.so 00:01:22.986 CC lib/thread/thread.o 00:01:22.986 CC lib/thread/iobuf.o 00:01:22.986 CC lib/sock/sock.o 00:01:22.986 CC lib/sock/sock_rpc.o 00:01:23.549 LIB libspdk_sock.a 00:01:23.549 SO libspdk_sock.so.9.0 00:01:23.549 SYMLINK libspdk_sock.so 00:01:23.806 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:23.806 CC lib/nvme/nvme_ctrlr.o 00:01:23.806 CC lib/nvme/nvme_fabric.o 00:01:23.806 CC lib/nvme/nvme_ns_cmd.o 00:01:23.806 CC lib/nvme/nvme_ns.o 00:01:23.806 CC lib/nvme/nvme_pcie_common.o 00:01:23.806 CC lib/nvme/nvme_pcie.o 00:01:23.806 CC lib/nvme/nvme.o 00:01:23.806 CC lib/nvme/nvme_qpair.o 00:01:23.806 CC lib/nvme/nvme_quirks.o 00:01:23.806 CC lib/nvme/nvme_transport.o 00:01:23.806 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:23.806 CC lib/nvme/nvme_discovery.o 00:01:23.806 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:23.806 CC lib/nvme/nvme_opal.o 00:01:23.806 CC lib/nvme/nvme_tcp.o 00:01:23.806 CC lib/nvme/nvme_poll_group.o 00:01:23.806 CC lib/nvme/nvme_io_msg.o 00:01:23.806 CC lib/nvme/nvme_zns.o 00:01:23.806 CC lib/nvme/nvme_stubs.o 00:01:23.806 CC lib/nvme/nvme_auth.o 00:01:23.806 CC lib/nvme/nvme_cuse.o 00:01:23.806 CC lib/nvme/nvme_vfio_user.o 00:01:23.806 CC lib/nvme/nvme_rdma.o 00:01:24.063 LIB libspdk_thread.a 00:01:24.063 SO libspdk_thread.so.10.0 00:01:24.320 SYMLINK libspdk_thread.so 00:01:24.577 CC lib/init/json_config.o 00:01:24.577 CC lib/init/subsystem.o 00:01:24.577 CC lib/init/subsystem_rpc.o 00:01:24.577 CC lib/init/rpc.o 00:01:24.577 CC lib/blob/request.o 00:01:24.577 CC lib/blob/blobstore.o 00:01:24.577 CC lib/blob/zeroes.o 00:01:24.577 CC lib/blob/blob_bs_dev.o 00:01:24.577 CC 
lib/vfu_tgt/tgt_endpoint.o 00:01:24.577 CC lib/vfu_tgt/tgt_rpc.o 00:01:24.577 CC lib/accel/accel.o 00:01:24.577 CC lib/accel/accel_rpc.o 00:01:24.577 CC lib/accel/accel_sw.o 00:01:24.577 CC lib/virtio/virtio.o 00:01:24.577 CC lib/virtio/virtio_vfio_user.o 00:01:24.577 CC lib/virtio/virtio_vhost_user.o 00:01:24.577 CC lib/virtio/virtio_pci.o 00:01:24.577 LIB libspdk_init.a 00:01:24.834 SO libspdk_init.so.5.0 00:01:24.834 LIB libspdk_vfu_tgt.a 00:01:24.834 SYMLINK libspdk_init.so 00:01:24.834 LIB libspdk_virtio.a 00:01:24.834 SO libspdk_vfu_tgt.so.3.0 00:01:24.834 SO libspdk_virtio.so.7.0 00:01:24.834 SYMLINK libspdk_vfu_tgt.so 00:01:24.834 SYMLINK libspdk_virtio.so 00:01:25.092 CC lib/event/app.o 00:01:25.092 CC lib/event/reactor.o 00:01:25.092 CC lib/event/log_rpc.o 00:01:25.092 CC lib/event/app_rpc.o 00:01:25.092 CC lib/event/scheduler_static.o 00:01:25.349 LIB libspdk_accel.a 00:01:25.349 SO libspdk_accel.so.15.0 00:01:25.349 SYMLINK libspdk_accel.so 00:01:25.349 LIB libspdk_event.a 00:01:25.349 LIB libspdk_nvme.a 00:01:25.349 SO libspdk_event.so.13.0 00:01:25.606 SO libspdk_nvme.so.13.0 00:01:25.606 SYMLINK libspdk_event.so 00:01:25.606 CC lib/bdev/bdev.o 00:01:25.606 CC lib/bdev/bdev_rpc.o 00:01:25.606 CC lib/bdev/bdev_zone.o 00:01:25.606 CC lib/bdev/part.o 00:01:25.606 CC lib/bdev/scsi_nvme.o 00:01:25.606 SYMLINK libspdk_nvme.so 00:01:26.537 LIB libspdk_blob.a 00:01:26.537 SO libspdk_blob.so.11.0 00:01:26.537 SYMLINK libspdk_blob.so 00:01:27.102 CC lib/blobfs/blobfs.o 00:01:27.102 CC lib/blobfs/tree.o 00:01:27.102 CC lib/lvol/lvol.o 00:01:27.360 LIB libspdk_bdev.a 00:01:27.360 SO libspdk_bdev.so.15.0 00:01:27.618 LIB libspdk_blobfs.a 00:01:27.618 SO libspdk_blobfs.so.10.0 00:01:27.618 SYMLINK libspdk_bdev.so 00:01:27.618 LIB libspdk_lvol.a 00:01:27.618 SYMLINK libspdk_blobfs.so 00:01:27.618 SO libspdk_lvol.so.10.0 00:01:27.618 SYMLINK libspdk_lvol.so 00:01:27.876 CC lib/nbd/nbd_rpc.o 00:01:27.876 CC lib/nvmf/ctrlr.o 00:01:27.876 CC lib/nbd/nbd.o 00:01:27.876 
CC lib/nvmf/ctrlr_discovery.o 00:01:27.876 CC lib/nvmf/ctrlr_bdev.o 00:01:27.876 CC lib/nvmf/subsystem.o 00:01:27.876 CC lib/nvmf/nvmf.o 00:01:27.876 CC lib/nvmf/nvmf_rpc.o 00:01:27.876 CC lib/scsi/dev.o 00:01:27.876 CC lib/nvmf/transport.o 00:01:27.876 CC lib/scsi/lun.o 00:01:27.876 CC lib/nvmf/tcp.o 00:01:27.876 CC lib/scsi/port.o 00:01:27.876 CC lib/nvmf/stubs.o 00:01:27.876 CC lib/scsi/scsi.o 00:01:27.876 CC lib/nvmf/mdns_server.o 00:01:27.876 CC lib/scsi/scsi_bdev.o 00:01:27.876 CC lib/scsi/scsi_pr.o 00:01:27.876 CC lib/nvmf/vfio_user.o 00:01:27.876 CC lib/scsi/scsi_rpc.o 00:01:27.876 CC lib/nvmf/rdma.o 00:01:27.876 CC lib/nvmf/auth.o 00:01:27.876 CC lib/scsi/task.o 00:01:27.876 CC lib/ublk/ublk.o 00:01:27.876 CC lib/ublk/ublk_rpc.o 00:01:27.876 CC lib/ftl/ftl_core.o 00:01:27.876 CC lib/ftl/ftl_init.o 00:01:27.876 CC lib/ftl/ftl_layout.o 00:01:27.876 CC lib/ftl/ftl_debug.o 00:01:27.876 CC lib/ftl/ftl_io.o 00:01:27.876 CC lib/ftl/ftl_sb.o 00:01:27.876 CC lib/ftl/ftl_l2p.o 00:01:27.876 CC lib/ftl/ftl_l2p_flat.o 00:01:27.876 CC lib/ftl/ftl_nv_cache.o 00:01:27.876 CC lib/ftl/ftl_band.o 00:01:27.876 CC lib/ftl/ftl_band_ops.o 00:01:27.876 CC lib/ftl/ftl_writer.o 00:01:27.876 CC lib/ftl/ftl_rq.o 00:01:27.876 CC lib/ftl/ftl_reloc.o 00:01:27.876 CC lib/ftl/ftl_l2p_cache.o 00:01:27.876 CC lib/ftl/ftl_p2l.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:27.876 CC lib/ftl/utils/ftl_conf.o 00:01:27.876 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:27.876 CC 
lib/ftl/utils/ftl_mempool.o 00:01:27.876 CC lib/ftl/utils/ftl_bitmap.o 00:01:27.876 CC lib/ftl/utils/ftl_property.o 00:01:27.876 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:27.876 CC lib/ftl/utils/ftl_md.o 00:01:27.876 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:27.876 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:27.876 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:27.876 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:27.876 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:27.876 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:27.876 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:27.876 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:27.876 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:27.876 CC lib/ftl/base/ftl_base_dev.o 00:01:27.876 CC lib/ftl/base/ftl_base_bdev.o 00:01:27.876 CC lib/ftl/ftl_trace.o 00:01:28.442 LIB libspdk_nbd.a 00:01:28.442 SO libspdk_nbd.so.7.0 00:01:28.442 SYMLINK libspdk_nbd.so 00:01:28.442 LIB libspdk_scsi.a 00:01:28.442 SO libspdk_scsi.so.9.0 00:01:28.442 LIB libspdk_ublk.a 00:01:28.442 SO libspdk_ublk.so.3.0 00:01:28.442 SYMLINK libspdk_scsi.so 00:01:28.442 SYMLINK libspdk_ublk.so 00:01:28.700 LIB libspdk_ftl.a 00:01:28.700 SO libspdk_ftl.so.9.0 00:01:28.700 CC lib/iscsi/conn.o 00:01:28.700 CC lib/iscsi/init_grp.o 00:01:28.700 CC lib/iscsi/iscsi.o 00:01:28.700 CC lib/iscsi/md5.o 00:01:28.970 CC lib/iscsi/param.o 00:01:28.970 CC lib/iscsi/portal_grp.o 00:01:28.970 CC lib/iscsi/tgt_node.o 00:01:28.970 CC lib/iscsi/iscsi_subsystem.o 00:01:28.970 CC lib/iscsi/iscsi_rpc.o 00:01:28.970 CC lib/iscsi/task.o 00:01:28.970 CC lib/vhost/vhost.o 00:01:28.970 CC lib/vhost/vhost_rpc.o 00:01:28.970 CC lib/vhost/vhost_scsi.o 00:01:28.970 CC lib/vhost/vhost_blk.o 00:01:28.970 CC lib/vhost/rte_vhost_user.o 00:01:29.241 SYMLINK libspdk_ftl.so 00:01:29.499 LIB libspdk_nvmf.a 00:01:29.499 LIB libspdk_vhost.a 00:01:29.757 SO libspdk_nvmf.so.18.0 00:01:29.757 SO libspdk_vhost.so.8.0 00:01:29.757 SYMLINK libspdk_vhost.so 00:01:29.757 LIB libspdk_iscsi.a 00:01:29.757 SYMLINK libspdk_nvmf.so 00:01:29.757 SO 
libspdk_iscsi.so.8.0 00:01:30.016 SYMLINK libspdk_iscsi.so 00:01:30.582 CC module/env_dpdk/env_dpdk_rpc.o 00:01:30.582 CC module/vfu_device/vfu_virtio.o 00:01:30.582 CC module/vfu_device/vfu_virtio_blk.o 00:01:30.582 CC module/vfu_device/vfu_virtio_scsi.o 00:01:30.582 CC module/vfu_device/vfu_virtio_rpc.o 00:01:30.582 LIB libspdk_env_dpdk_rpc.a 00:01:30.582 CC module/blob/bdev/blob_bdev.o 00:01:30.582 CC module/sock/posix/posix.o 00:01:30.582 CC module/scheduler/gscheduler/gscheduler.o 00:01:30.582 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:30.582 CC module/accel/iaa/accel_iaa.o 00:01:30.582 CC module/accel/iaa/accel_iaa_rpc.o 00:01:30.582 SO libspdk_env_dpdk_rpc.so.6.0 00:01:30.582 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:30.582 CC module/accel/error/accel_error.o 00:01:30.582 CC module/accel/error/accel_error_rpc.o 00:01:30.582 CC module/accel/ioat/accel_ioat.o 00:01:30.582 CC module/keyring/file/keyring_rpc.o 00:01:30.582 CC module/keyring/file/keyring.o 00:01:30.582 CC module/accel/ioat/accel_ioat_rpc.o 00:01:30.582 CC module/accel/dsa/accel_dsa.o 00:01:30.582 CC module/accel/dsa/accel_dsa_rpc.o 00:01:30.582 SYMLINK libspdk_env_dpdk_rpc.so 00:01:30.840 LIB libspdk_keyring_file.a 00:01:30.840 LIB libspdk_scheduler_gscheduler.a 00:01:30.840 LIB libspdk_scheduler_dpdk_governor.a 00:01:30.840 LIB libspdk_scheduler_dynamic.a 00:01:30.840 LIB libspdk_accel_error.a 00:01:30.840 SO libspdk_keyring_file.so.1.0 00:01:30.840 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:30.840 LIB libspdk_accel_iaa.a 00:01:30.840 SO libspdk_scheduler_gscheduler.so.4.0 00:01:30.840 LIB libspdk_accel_ioat.a 00:01:30.840 SO libspdk_scheduler_dynamic.so.4.0 00:01:30.840 SO libspdk_accel_error.so.2.0 00:01:30.840 SO libspdk_accel_iaa.so.3.0 00:01:30.840 LIB libspdk_blob_bdev.a 00:01:30.840 LIB libspdk_accel_dsa.a 00:01:30.840 SO libspdk_accel_ioat.so.6.0 00:01:30.840 SYMLINK libspdk_scheduler_gscheduler.so 00:01:30.840 SYMLINK libspdk_keyring_file.so 00:01:30.840 
SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:30.840 SO libspdk_blob_bdev.so.11.0 00:01:30.840 SYMLINK libspdk_scheduler_dynamic.so 00:01:30.840 SYMLINK libspdk_accel_iaa.so 00:01:30.840 SO libspdk_accel_dsa.so.5.0 00:01:30.840 SYMLINK libspdk_accel_error.so 00:01:30.840 SYMLINK libspdk_accel_ioat.so 00:01:30.840 SYMLINK libspdk_blob_bdev.so 00:01:30.840 SYMLINK libspdk_accel_dsa.so 00:01:30.840 LIB libspdk_vfu_device.a 00:01:31.098 SO libspdk_vfu_device.so.3.0 00:01:31.098 SYMLINK libspdk_vfu_device.so 00:01:31.098 LIB libspdk_sock_posix.a 00:01:31.098 SO libspdk_sock_posix.so.6.0 00:01:31.356 SYMLINK libspdk_sock_posix.so 00:01:31.356 CC module/bdev/null/bdev_null.o 00:01:31.356 CC module/bdev/null/bdev_null_rpc.o 00:01:31.356 CC module/bdev/raid/bdev_raid_rpc.o 00:01:31.356 CC module/bdev/gpt/gpt.o 00:01:31.356 CC module/bdev/raid/bdev_raid.o 00:01:31.356 CC module/bdev/gpt/vbdev_gpt.o 00:01:31.356 CC module/bdev/raid/raid1.o 00:01:31.356 CC module/bdev/raid/bdev_raid_sb.o 00:01:31.356 CC module/bdev/raid/raid0.o 00:01:31.356 CC module/blobfs/bdev/blobfs_bdev.o 00:01:31.356 CC module/bdev/raid/concat.o 00:01:31.356 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:31.356 CC module/bdev/split/vbdev_split.o 00:01:31.356 CC module/bdev/error/vbdev_error.o 00:01:31.356 CC module/bdev/split/vbdev_split_rpc.o 00:01:31.356 CC module/bdev/aio/bdev_aio_rpc.o 00:01:31.356 CC module/bdev/error/vbdev_error_rpc.o 00:01:31.356 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:31.356 CC module/bdev/aio/bdev_aio.o 00:01:31.356 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:31.356 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:31.356 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:31.356 CC module/bdev/nvme/bdev_nvme.o 00:01:31.356 CC module/bdev/ftl/bdev_ftl.o 00:01:31.357 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:31.357 CC module/bdev/nvme/nvme_rpc.o 00:01:31.357 CC module/bdev/nvme/bdev_mdns_client.o 00:01:31.357 CC module/bdev/nvme/vbdev_opal.o 00:01:31.357 CC 
module/bdev/iscsi/bdev_iscsi.o 00:01:31.357 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:31.357 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:31.357 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:31.357 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:31.357 CC module/bdev/malloc/bdev_malloc.o 00:01:31.357 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:31.357 CC module/bdev/passthru/vbdev_passthru.o 00:01:31.357 CC module/bdev/delay/vbdev_delay.o 00:01:31.357 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:31.357 CC module/bdev/lvol/vbdev_lvol.o 00:01:31.357 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:31.357 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:31.357 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:31.615 LIB libspdk_blobfs_bdev.a 00:01:31.615 LIB libspdk_bdev_null.a 00:01:31.615 LIB libspdk_bdev_split.a 00:01:31.615 SO libspdk_blobfs_bdev.so.6.0 00:01:31.615 SO libspdk_bdev_null.so.6.0 00:01:31.615 SO libspdk_bdev_split.so.6.0 00:01:31.615 LIB libspdk_bdev_gpt.a 00:01:31.615 SYMLINK libspdk_blobfs_bdev.so 00:01:31.615 LIB libspdk_bdev_error.a 00:01:31.615 LIB libspdk_bdev_passthru.a 00:01:31.615 LIB libspdk_bdev_ftl.a 00:01:31.615 SYMLINK libspdk_bdev_null.so 00:01:31.615 SO libspdk_bdev_gpt.so.6.0 00:01:31.615 SYMLINK libspdk_bdev_split.so 00:01:31.615 SO libspdk_bdev_error.so.6.0 00:01:31.615 SO libspdk_bdev_ftl.so.6.0 00:01:31.615 SO libspdk_bdev_passthru.so.6.0 00:01:31.615 LIB libspdk_bdev_aio.a 00:01:31.615 LIB libspdk_bdev_zone_block.a 00:01:31.615 LIB libspdk_bdev_delay.a 00:01:31.615 LIB libspdk_bdev_iscsi.a 00:01:31.873 LIB libspdk_bdev_malloc.a 00:01:31.873 SYMLINK libspdk_bdev_gpt.so 00:01:31.873 SO libspdk_bdev_zone_block.so.6.0 00:01:31.873 SO libspdk_bdev_aio.so.6.0 00:01:31.873 SO libspdk_bdev_iscsi.so.6.0 00:01:31.873 SO libspdk_bdev_delay.so.6.0 00:01:31.873 SYMLINK libspdk_bdev_ftl.so 00:01:31.873 SYMLINK libspdk_bdev_error.so 00:01:31.873 SYMLINK libspdk_bdev_passthru.so 00:01:31.873 SO libspdk_bdev_malloc.so.6.0 00:01:31.873 
SYMLINK libspdk_bdev_aio.so 00:01:31.873 SYMLINK libspdk_bdev_zone_block.so 00:01:31.873 SYMLINK libspdk_bdev_delay.so 00:01:31.873 SYMLINK libspdk_bdev_iscsi.so 00:01:31.873 LIB libspdk_bdev_lvol.a 00:01:31.873 SYMLINK libspdk_bdev_malloc.so 00:01:31.873 LIB libspdk_bdev_virtio.a 00:01:31.873 SO libspdk_bdev_lvol.so.6.0 00:01:31.873 SO libspdk_bdev_virtio.so.6.0 00:01:31.873 SYMLINK libspdk_bdev_lvol.so 00:01:31.873 SYMLINK libspdk_bdev_virtio.so 00:01:32.132 LIB libspdk_bdev_raid.a 00:01:32.132 SO libspdk_bdev_raid.so.6.0 00:01:32.132 SYMLINK libspdk_bdev_raid.so 00:01:33.070 LIB libspdk_bdev_nvme.a 00:01:33.070 SO libspdk_bdev_nvme.so.7.0 00:01:33.070 SYMLINK libspdk_bdev_nvme.so 00:01:33.635 CC module/event/subsystems/vmd/vmd.o 00:01:33.635 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:33.635 CC module/event/subsystems/scheduler/scheduler.o 00:01:33.635 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:33.635 CC module/event/subsystems/iobuf/iobuf.o 00:01:33.635 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:33.635 CC module/event/subsystems/keyring/keyring.o 00:01:33.635 CC module/event/subsystems/sock/sock.o 00:01:33.635 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:33.892 LIB libspdk_event_vhost_blk.a 00:01:33.892 LIB libspdk_event_vmd.a 00:01:33.892 LIB libspdk_event_scheduler.a 00:01:33.892 LIB libspdk_event_sock.a 00:01:33.892 LIB libspdk_event_keyring.a 00:01:33.892 SO libspdk_event_vhost_blk.so.3.0 00:01:33.892 LIB libspdk_event_vfu_tgt.a 00:01:33.892 LIB libspdk_event_iobuf.a 00:01:33.892 SO libspdk_event_vmd.so.6.0 00:01:33.892 SO libspdk_event_sock.so.5.0 00:01:33.892 SO libspdk_event_scheduler.so.4.0 00:01:33.892 SO libspdk_event_vfu_tgt.so.3.0 00:01:33.892 SO libspdk_event_keyring.so.1.0 00:01:33.892 SO libspdk_event_iobuf.so.3.0 00:01:33.892 SYMLINK libspdk_event_vhost_blk.so 00:01:33.892 SYMLINK libspdk_event_sock.so 00:01:33.892 SYMLINK libspdk_event_vmd.so 00:01:33.892 SYMLINK libspdk_event_scheduler.so 00:01:33.892 SYMLINK 
libspdk_event_vfu_tgt.so 00:01:33.892 SYMLINK libspdk_event_keyring.so 00:01:33.892 SYMLINK libspdk_event_iobuf.so 00:01:34.150 CC module/event/subsystems/accel/accel.o 00:01:34.408 LIB libspdk_event_accel.a 00:01:34.408 SO libspdk_event_accel.so.6.0 00:01:34.408 SYMLINK libspdk_event_accel.so 00:01:34.666 CC module/event/subsystems/bdev/bdev.o 00:01:34.924 LIB libspdk_event_bdev.a 00:01:34.924 SO libspdk_event_bdev.so.6.0 00:01:34.924 SYMLINK libspdk_event_bdev.so 00:01:35.182 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:35.182 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:35.182 CC module/event/subsystems/scsi/scsi.o 00:01:35.182 CC module/event/subsystems/nbd/nbd.o 00:01:35.182 CC module/event/subsystems/ublk/ublk.o 00:01:35.440 LIB libspdk_event_scsi.a 00:01:35.440 LIB libspdk_event_nbd.a 00:01:35.440 LIB libspdk_event_nvmf.a 00:01:35.440 LIB libspdk_event_ublk.a 00:01:35.440 SO libspdk_event_scsi.so.6.0 00:01:35.440 SO libspdk_event_nbd.so.6.0 00:01:35.440 SO libspdk_event_ublk.so.3.0 00:01:35.440 SO libspdk_event_nvmf.so.6.0 00:01:35.440 SYMLINK libspdk_event_scsi.so 00:01:35.440 SYMLINK libspdk_event_nbd.so 00:01:35.440 SYMLINK libspdk_event_ublk.so 00:01:35.440 SYMLINK libspdk_event_nvmf.so 00:01:35.698 CC module/event/subsystems/iscsi/iscsi.o 00:01:35.698 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:35.965 LIB libspdk_event_iscsi.a 00:01:35.965 LIB libspdk_event_vhost_scsi.a 00:01:35.965 SO libspdk_event_iscsi.so.6.0 00:01:35.965 SO libspdk_event_vhost_scsi.so.3.0 00:01:35.965 SYMLINK libspdk_event_iscsi.so 00:01:35.965 SYMLINK libspdk_event_vhost_scsi.so 00:01:36.224 SO libspdk.so.6.0 00:01:36.224 SYMLINK libspdk.so 00:01:36.483 CC app/spdk_lspci/spdk_lspci.o 00:01:36.483 CC app/spdk_nvme_discover/discovery_aer.o 00:01:36.483 CXX app/trace/trace.o 00:01:36.483 CC app/trace_record/trace_record.o 00:01:36.483 CC app/spdk_nvme_identify/identify.o 00:01:36.483 CC app/spdk_top/spdk_top.o 00:01:36.483 CC app/spdk_nvme_perf/perf.o 00:01:36.483 
CC test/rpc_client/rpc_client_test.o 00:01:36.483 TEST_HEADER include/spdk/accel.h 00:01:36.483 TEST_HEADER include/spdk/accel_module.h 00:01:36.483 TEST_HEADER include/spdk/assert.h 00:01:36.483 TEST_HEADER include/spdk/barrier.h 00:01:36.483 TEST_HEADER include/spdk/base64.h 00:01:36.483 TEST_HEADER include/spdk/bdev.h 00:01:36.483 TEST_HEADER include/spdk/bdev_module.h 00:01:36.483 TEST_HEADER include/spdk/bit_array.h 00:01:36.483 TEST_HEADER include/spdk/bdev_zone.h 00:01:36.483 TEST_HEADER include/spdk/bit_pool.h 00:01:36.483 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:36.483 TEST_HEADER include/spdk/blob_bdev.h 00:01:36.483 TEST_HEADER include/spdk/blobfs.h 00:01:36.483 TEST_HEADER include/spdk/blob.h 00:01:36.483 TEST_HEADER include/spdk/conf.h 00:01:36.483 TEST_HEADER include/spdk/config.h 00:01:36.483 TEST_HEADER include/spdk/cpuset.h 00:01:36.483 TEST_HEADER include/spdk/crc16.h 00:01:36.483 TEST_HEADER include/spdk/crc32.h 00:01:36.483 TEST_HEADER include/spdk/crc64.h 00:01:36.483 TEST_HEADER include/spdk/dif.h 00:01:36.483 TEST_HEADER include/spdk/dma.h 00:01:36.483 TEST_HEADER include/spdk/endian.h 00:01:36.483 TEST_HEADER include/spdk/env_dpdk.h 00:01:36.483 TEST_HEADER include/spdk/event.h 00:01:36.483 TEST_HEADER include/spdk/env.h 00:01:36.483 TEST_HEADER include/spdk/fd_group.h 00:01:36.483 TEST_HEADER include/spdk/fd.h 00:01:36.483 TEST_HEADER include/spdk/file.h 00:01:36.483 TEST_HEADER include/spdk/ftl.h 00:01:36.483 TEST_HEADER include/spdk/gpt_spec.h 00:01:36.483 CC app/iscsi_tgt/iscsi_tgt.o 00:01:36.483 TEST_HEADER include/spdk/histogram_data.h 00:01:36.483 TEST_HEADER include/spdk/idxd.h 00:01:36.483 TEST_HEADER include/spdk/idxd_spec.h 00:01:36.483 TEST_HEADER include/spdk/hexlify.h 00:01:36.483 TEST_HEADER include/spdk/init.h 00:01:36.483 TEST_HEADER include/spdk/ioat.h 00:01:36.483 TEST_HEADER include/spdk/ioat_spec.h 00:01:36.483 TEST_HEADER include/spdk/iscsi_spec.h 00:01:36.483 TEST_HEADER include/spdk/json.h 00:01:36.483 CC 
app/nvmf_tgt/nvmf_main.o 00:01:36.483 TEST_HEADER include/spdk/jsonrpc.h 00:01:36.483 TEST_HEADER include/spdk/keyring.h 00:01:36.483 CC app/spdk_dd/spdk_dd.o 00:01:36.483 TEST_HEADER include/spdk/keyring_module.h 00:01:36.483 TEST_HEADER include/spdk/likely.h 00:01:36.483 TEST_HEADER include/spdk/log.h 00:01:36.483 TEST_HEADER include/spdk/mmio.h 00:01:36.483 TEST_HEADER include/spdk/lvol.h 00:01:36.483 TEST_HEADER include/spdk/nbd.h 00:01:36.746 TEST_HEADER include/spdk/memory.h 00:01:36.746 TEST_HEADER include/spdk/nvme.h 00:01:36.746 TEST_HEADER include/spdk/nvme_intel.h 00:01:36.746 TEST_HEADER include/spdk/notify.h 00:01:36.746 TEST_HEADER include/spdk/nvme_spec.h 00:01:36.746 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:36.746 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:36.746 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:36.746 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:36.746 TEST_HEADER include/spdk/nvme_zns.h 00:01:36.746 CC app/spdk_tgt/spdk_tgt.o 00:01:36.746 CC app/vhost/vhost.o 00:01:36.746 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:36.746 TEST_HEADER include/spdk/nvmf.h 00:01:36.746 TEST_HEADER include/spdk/nvmf_spec.h 00:01:36.746 TEST_HEADER include/spdk/opal.h 00:01:36.746 TEST_HEADER include/spdk/nvmf_transport.h 00:01:36.746 TEST_HEADER include/spdk/pci_ids.h 00:01:36.746 TEST_HEADER include/spdk/pipe.h 00:01:36.746 TEST_HEADER include/spdk/queue.h 00:01:36.746 TEST_HEADER include/spdk/opal_spec.h 00:01:36.746 TEST_HEADER include/spdk/reduce.h 00:01:36.746 TEST_HEADER include/spdk/rpc.h 00:01:36.746 TEST_HEADER include/spdk/scsi.h 00:01:36.746 TEST_HEADER include/spdk/scheduler.h 00:01:36.746 TEST_HEADER include/spdk/scsi_spec.h 00:01:36.746 TEST_HEADER include/spdk/sock.h 00:01:36.746 TEST_HEADER include/spdk/stdinc.h 00:01:36.746 TEST_HEADER include/spdk/thread.h 00:01:36.746 TEST_HEADER include/spdk/string.h 00:01:36.746 TEST_HEADER include/spdk/trace_parser.h 00:01:36.746 TEST_HEADER include/spdk/trace.h 00:01:36.746 TEST_HEADER 
include/spdk/tree.h 00:01:36.746 TEST_HEADER include/spdk/ublk.h 00:01:36.746 TEST_HEADER include/spdk/util.h 00:01:36.746 TEST_HEADER include/spdk/version.h 00:01:36.746 TEST_HEADER include/spdk/uuid.h 00:01:36.746 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:36.746 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:36.746 TEST_HEADER include/spdk/vhost.h 00:01:36.746 TEST_HEADER include/spdk/vmd.h 00:01:36.746 TEST_HEADER include/spdk/zipf.h 00:01:36.746 TEST_HEADER include/spdk/xor.h 00:01:36.746 CXX test/cpp_headers/accel.o 00:01:36.746 CXX test/cpp_headers/accel_module.o 00:01:36.746 CXX test/cpp_headers/assert.o 00:01:36.746 CXX test/cpp_headers/barrier.o 00:01:36.746 CXX test/cpp_headers/base64.o 00:01:36.746 CXX test/cpp_headers/bdev_module.o 00:01:36.746 CXX test/cpp_headers/bdev_zone.o 00:01:36.746 CXX test/cpp_headers/bdev.o 00:01:36.746 CXX test/cpp_headers/bit_array.o 00:01:36.746 CXX test/cpp_headers/bit_pool.o 00:01:36.746 CXX test/cpp_headers/blob_bdev.o 00:01:36.746 CXX test/cpp_headers/blobfs_bdev.o 00:01:36.746 CXX test/cpp_headers/blob.o 00:01:36.746 CXX test/cpp_headers/blobfs.o 00:01:36.746 CXX test/cpp_headers/conf.o 00:01:36.746 CXX test/cpp_headers/config.o 00:01:36.746 CXX test/cpp_headers/cpuset.o 00:01:36.746 CXX test/cpp_headers/crc16.o 00:01:36.746 CXX test/cpp_headers/crc64.o 00:01:36.746 CXX test/cpp_headers/dif.o 00:01:36.746 CXX test/cpp_headers/crc32.o 00:01:36.746 CC app/fio/nvme/fio_plugin.o 00:01:36.746 CC examples/util/zipf/zipf.o 00:01:36.746 CXX test/cpp_headers/dma.o 00:01:36.746 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:36.746 CC examples/nvme/hello_world/hello_world.o 00:01:36.746 CC examples/ioat/perf/perf.o 00:01:36.746 CC test/nvme/aer/aer.o 00:01:36.746 CC examples/nvme/reconnect/reconnect.o 00:01:36.746 CC test/event/reactor/reactor.o 00:01:36.746 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:36.746 CC examples/nvme/hotplug/hotplug.o 00:01:36.746 CC test/nvme/fused_ordering/fused_ordering.o 
00:01:36.746 CC test/thread/poller_perf/poller_perf.o 00:01:36.746 CC test/nvme/reset/reset.o 00:01:36.746 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:36.746 CC test/event/event_perf/event_perf.o 00:01:36.746 CC test/app/stub/stub.o 00:01:36.746 CC examples/vmd/lsvmd/lsvmd.o 00:01:36.746 CC test/app/histogram_perf/histogram_perf.o 00:01:36.747 CC examples/ioat/verify/verify.o 00:01:36.747 CC test/blobfs/mkfs/mkfs.o 00:01:36.747 CC test/env/vtophys/vtophys.o 00:01:36.747 CC examples/nvme/abort/abort.o 00:01:36.747 CC examples/idxd/perf/perf.o 00:01:36.747 CC test/app/jsoncat/jsoncat.o 00:01:36.747 CC test/nvme/connect_stress/connect_stress.o 00:01:36.747 CC test/env/pci/pci_ut.o 00:01:36.747 CC test/nvme/overhead/overhead.o 00:01:36.747 CC test/nvme/startup/startup.o 00:01:36.747 CC test/nvme/err_injection/err_injection.o 00:01:36.747 CC examples/blob/hello_world/hello_blob.o 00:01:36.747 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:36.747 CC test/env/memory/memory_ut.o 00:01:36.747 CC test/event/reactor_perf/reactor_perf.o 00:01:36.747 CC test/nvme/cuse/cuse.o 00:01:36.747 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:36.747 CC test/nvme/e2edp/nvme_dp.o 00:01:36.747 CC examples/vmd/led/led.o 00:01:36.747 CC examples/accel/perf/accel_perf.o 00:01:36.747 CC examples/nvme/arbitration/arbitration.o 00:01:36.747 CC test/app/bdev_svc/bdev_svc.o 00:01:36.747 CC test/nvme/compliance/nvme_compliance.o 00:01:36.747 CC test/nvme/sgl/sgl.o 00:01:36.747 CC app/fio/bdev/fio_plugin.o 00:01:36.747 CC examples/blob/cli/blobcli.o 00:01:36.747 CC test/nvme/fdp/fdp.o 00:01:36.747 CC test/nvme/reserve/reserve.o 00:01:36.747 CC examples/bdev/hello_world/hello_bdev.o 00:01:36.747 CC examples/sock/hello_world/hello_sock.o 00:01:36.747 CC test/nvme/simple_copy/simple_copy.o 00:01:36.747 CC test/nvme/boot_partition/boot_partition.o 00:01:36.747 CC test/event/app_repeat/app_repeat.o 00:01:37.011 CC test/accel/dif/dif.o 00:01:37.011 CC examples/thread/thread/thread_ex.o 
00:01:37.011 CC test/dma/test_dma/test_dma.o 00:01:37.011 CC test/event/scheduler/scheduler.o 00:01:37.011 CC examples/nvmf/nvmf/nvmf.o 00:01:37.011 CC test/bdev/bdevio/bdevio.o 00:01:37.011 CC examples/bdev/bdevperf/bdevperf.o 00:01:37.011 LINK spdk_nvme_discover 00:01:37.011 LINK rpc_client_test 00:01:37.011 LINK spdk_lspci 00:01:37.011 CC test/lvol/esnap/esnap.o 00:01:37.011 LINK interrupt_tgt 00:01:37.011 CC test/env/mem_callbacks/mem_callbacks.o 00:01:37.011 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:37.270 LINK zipf 00:01:37.270 LINK lsvmd 00:01:37.270 LINK env_dpdk_post_init 00:01:37.270 LINK event_perf 00:01:37.270 LINK vtophys 00:01:37.270 LINK nvmf_tgt 00:01:37.270 LINK histogram_perf 00:01:37.270 CXX test/cpp_headers/endian.o 00:01:37.270 CXX test/cpp_headers/env_dpdk.o 00:01:37.270 CXX test/cpp_headers/env.o 00:01:37.270 LINK vhost 00:01:37.270 CXX test/cpp_headers/fd_group.o 00:01:37.270 CXX test/cpp_headers/event.o 00:01:37.270 LINK reactor_perf 00:01:37.270 CXX test/cpp_headers/fd.o 00:01:37.270 LINK connect_stress 00:01:37.270 LINK mkfs 00:01:37.270 LINK doorbell_aers 00:01:37.270 LINK boot_partition 00:01:37.270 LINK app_repeat 00:01:37.270 LINK ioat_perf 00:01:37.270 LINK fused_ordering 00:01:37.270 CXX test/cpp_headers/file.o 00:01:37.270 CXX test/cpp_headers/ftl.o 00:01:37.270 LINK bdev_svc 00:01:37.270 LINK hello_world 00:01:37.270 CXX test/cpp_headers/gpt_spec.o 00:01:37.270 LINK iscsi_tgt 00:01:37.270 LINK spdk_tgt 00:01:37.270 LINK spdk_trace_record 00:01:37.270 LINK poller_perf 00:01:37.270 LINK reactor 00:01:37.270 LINK hello_blob 00:01:37.270 CXX test/cpp_headers/hexlify.o 00:01:37.270 LINK jsoncat 00:01:37.270 LINK led 00:01:37.270 LINK stub 00:01:37.270 LINK reset 00:01:37.270 LINK pmr_persistence 00:01:37.270 LINK hello_bdev 00:01:37.270 CXX test/cpp_headers/histogram_data.o 00:01:37.270 LINK aer 00:01:37.270 LINK startup 00:01:37.270 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:37.270 CXX test/cpp_headers/idxd.o 00:01:37.270 
CXX test/cpp_headers/idxd_spec.o 00:01:37.535 LINK scheduler 00:01:37.535 LINK spdk_trace 00:01:37.535 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:37.535 LINK err_injection 00:01:37.535 LINK thread 00:01:37.535 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:37.535 LINK cmb_copy 00:01:37.535 CXX test/cpp_headers/init.o 00:01:37.535 LINK reconnect 00:01:37.535 LINK idxd_perf 00:01:37.535 CXX test/cpp_headers/ioat.o 00:01:37.535 CXX test/cpp_headers/ioat_spec.o 00:01:37.535 CXX test/cpp_headers/iscsi_spec.o 00:01:37.535 LINK hotplug 00:01:37.535 LINK verify 00:01:37.535 CXX test/cpp_headers/json.o 00:01:37.535 CXX test/cpp_headers/jsonrpc.o 00:01:37.535 LINK reserve 00:01:37.535 CXX test/cpp_headers/keyring.o 00:01:37.535 CXX test/cpp_headers/keyring_module.o 00:01:37.535 CXX test/cpp_headers/likely.o 00:01:37.535 CXX test/cpp_headers/log.o 00:01:37.535 CXX test/cpp_headers/lvol.o 00:01:37.535 CXX test/cpp_headers/mmio.o 00:01:37.535 CXX test/cpp_headers/memory.o 00:01:37.535 CXX test/cpp_headers/nbd.o 00:01:37.535 LINK hello_sock 00:01:37.535 CXX test/cpp_headers/notify.o 00:01:37.535 LINK simple_copy 00:01:37.535 LINK spdk_dd 00:01:37.535 CXX test/cpp_headers/nvme.o 00:01:37.535 LINK nvme_dp 00:01:37.535 LINK test_dma 00:01:37.535 CXX test/cpp_headers/nvme_intel.o 00:01:37.535 LINK sgl 00:01:37.535 LINK dif 00:01:37.535 CXX test/cpp_headers/nvme_ocssd.o 00:01:37.535 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:37.535 CXX test/cpp_headers/nvme_spec.o 00:01:37.535 CXX test/cpp_headers/nvme_zns.o 00:01:37.535 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:37.535 CXX test/cpp_headers/nvmf_cmd.o 00:01:37.535 CXX test/cpp_headers/nvmf.o 00:01:37.535 CXX test/cpp_headers/nvmf_spec.o 00:01:37.535 CXX test/cpp_headers/nvmf_transport.o 00:01:37.535 LINK overhead 00:01:37.535 CXX test/cpp_headers/opal.o 00:01:37.535 CXX test/cpp_headers/opal_spec.o 00:01:37.535 CXX test/cpp_headers/pci_ids.o 00:01:37.535 CXX test/cpp_headers/pipe.o 00:01:37.535 CXX test/cpp_headers/queue.o 
00:01:37.535 LINK nvme_manage 00:01:37.535 CXX test/cpp_headers/reduce.o 00:01:37.535 CXX test/cpp_headers/rpc.o 00:01:37.535 CXX test/cpp_headers/scheduler.o 00:01:37.535 LINK fdp 00:01:37.794 CXX test/cpp_headers/scsi.o 00:01:37.794 CXX test/cpp_headers/scsi_spec.o 00:01:37.794 LINK arbitration 00:01:37.794 CXX test/cpp_headers/stdinc.o 00:01:37.794 CXX test/cpp_headers/sock.o 00:01:37.794 LINK nvmf 00:01:37.794 CXX test/cpp_headers/string.o 00:01:37.794 CXX test/cpp_headers/thread.o 00:01:37.794 LINK blobcli 00:01:37.794 CXX test/cpp_headers/trace.o 00:01:37.794 CXX test/cpp_headers/trace_parser.o 00:01:37.794 LINK spdk_nvme 00:01:37.794 CXX test/cpp_headers/tree.o 00:01:37.794 CXX test/cpp_headers/ublk.o 00:01:37.794 LINK abort 00:01:37.794 LINK nvme_compliance 00:01:37.794 CXX test/cpp_headers/uuid.o 00:01:37.794 CXX test/cpp_headers/util.o 00:01:37.794 CXX test/cpp_headers/version.o 00:01:37.795 CXX test/cpp_headers/vfio_user_pci.o 00:01:37.795 CXX test/cpp_headers/vfio_user_spec.o 00:01:37.795 CXX test/cpp_headers/vhost.o 00:01:37.795 CXX test/cpp_headers/vmd.o 00:01:37.795 CXX test/cpp_headers/xor.o 00:01:37.795 CXX test/cpp_headers/zipf.o 00:01:37.795 LINK pci_ut 00:01:37.795 LINK nvme_fuzz 00:01:37.795 LINK bdevio 00:01:37.795 LINK accel_perf 00:01:37.795 LINK spdk_bdev 00:01:38.053 LINK spdk_nvme_identify 00:01:38.053 LINK spdk_top 00:01:38.053 LINK vhost_fuzz 00:01:38.053 LINK bdevperf 00:01:38.053 LINK mem_callbacks 00:01:38.053 LINK spdk_nvme_perf 00:01:38.312 LINK memory_ut 00:01:38.312 LINK cuse 00:01:38.879 LINK iscsi_fuzz 00:01:40.783 LINK esnap 00:01:41.042 00:01:41.042 real 0m42.452s 00:01:41.042 user 6m34.217s 00:01:41.042 sys 3m35.202s 00:01:41.042 10:53:38 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:41.042 10:53:38 make -- common/autotest_common.sh@10 -- $ set +x 00:01:41.042 ************************************ 00:01:41.042 END TEST make 00:01:41.042 ************************************ 00:01:41.042 10:53:38 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:41.042 10:53:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:41.042 10:53:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:41.042 10:53:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.042 10:53:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:41.042 10:53:38 -- pm/common@44 -- $ pid=1961587 00:01:41.042 10:53:38 -- pm/common@50 -- $ kill -TERM 1961587 00:01:41.042 10:53:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.042 10:53:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:41.042 10:53:38 -- pm/common@44 -- $ pid=1961588 00:01:41.042 10:53:38 -- pm/common@50 -- $ kill -TERM 1961588 00:01:41.042 10:53:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.042 10:53:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:41.042 10:53:38 -- pm/common@44 -- $ pid=1961591 00:01:41.042 10:53:38 -- pm/common@50 -- $ kill -TERM 1961591 00:01:41.042 10:53:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.042 10:53:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:41.042 10:53:38 -- pm/common@44 -- $ pid=1961621 00:01:41.042 10:53:38 -- pm/common@50 -- $ sudo -E kill -TERM 1961621 00:01:41.042 10:53:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:41.042 10:53:38 -- nvmf/common.sh@7 -- # uname -s 00:01:41.301 10:53:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:41.301 10:53:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:41.301 10:53:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:41.301 10:53:38 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:01:41.301 10:53:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:41.301 10:53:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:41.301 10:53:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:41.301 10:53:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:41.301 10:53:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:41.301 10:53:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:41.301 10:53:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:41.301 10:53:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:41.301 10:53:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:41.301 10:53:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:41.301 10:53:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:41.301 10:53:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:41.301 10:53:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:41.301 10:53:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:41.301 10:53:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.301 10:53:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.301 10:53:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.301 10:53:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.301 10:53:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.301 10:53:38 -- paths/export.sh@5 -- # export PATH 00:01:41.301 10:53:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.301 10:53:38 -- nvmf/common.sh@47 -- # : 0 00:01:41.301 10:53:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:41.301 10:53:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:41.301 10:53:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:41.301 10:53:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:41.301 10:53:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:41.301 10:53:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:41.301 10:53:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:41.301 10:53:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:41.301 10:53:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:41.301 10:53:38 -- spdk/autotest.sh@32 -- # uname -s 00:01:41.301 10:53:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:41.301 10:53:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:41.301 10:53:38 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:41.301 10:53:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:41.301 10:53:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:41.301 10:53:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:41.301 10:53:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:41.301 10:53:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:41.301 10:53:38 -- spdk/autotest.sh@48 -- # udevadm_pid=2019516 00:01:41.301 10:53:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:41.301 10:53:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:41.301 10:53:38 -- pm/common@17 -- # local monitor 00:01:41.301 10:53:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.301 10:53:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.301 10:53:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.301 10:53:38 -- pm/common@21 -- # date +%s 00:01:41.301 10:53:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.301 10:53:38 -- pm/common@21 -- # date +%s 00:01:41.301 10:53:38 -- pm/common@25 -- # sleep 1 00:01:41.301 10:53:38 -- pm/common@21 -- # date +%s 00:01:41.301 10:53:38 -- pm/common@21 -- # date +%s 00:01:41.301 10:53:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763218 00:01:41.301 10:53:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763218 00:01:41.301 10:53:38 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763218 00:01:41.301 10:53:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763218 00:01:41.301 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763218_collect-vmstat.pm.log 00:01:41.301 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763218_collect-cpu-temp.pm.log 00:01:41.301 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763218_collect-cpu-load.pm.log 00:01:41.301 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763218_collect-bmc-pm.bmc.pm.log 00:01:42.237 10:53:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:42.237 10:53:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:42.237 10:53:39 -- common/autotest_common.sh@721 -- # xtrace_disable 00:01:42.237 10:53:39 -- common/autotest_common.sh@10 -- # set +x 00:01:42.237 10:53:39 -- spdk/autotest.sh@59 -- # create_test_list 00:01:42.237 10:53:39 -- common/autotest_common.sh@745 -- # xtrace_disable 00:01:42.237 10:53:39 -- common/autotest_common.sh@10 -- # set +x 00:01:42.237 10:53:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:42.237 10:53:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.237 10:53:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.237 10:53:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 
00:01:42.237 10:53:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.237 10:53:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:42.237 10:53:39 -- common/autotest_common.sh@1452 -- # uname 00:01:42.237 10:53:39 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:01:42.237 10:53:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:42.237 10:53:39 -- common/autotest_common.sh@1472 -- # uname 00:01:42.237 10:53:39 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:01:42.237 10:53:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:42.237 10:53:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:42.237 10:53:39 -- spdk/autotest.sh@72 -- # hash lcov 00:01:42.237 10:53:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:42.237 10:53:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:42.237 --rc lcov_branch_coverage=1 00:01:42.237 --rc lcov_function_coverage=1 00:01:42.237 --rc genhtml_branch_coverage=1 00:01:42.237 --rc genhtml_function_coverage=1 00:01:42.237 --rc genhtml_legend=1 00:01:42.237 --rc geninfo_all_blocks=1 00:01:42.237 ' 00:01:42.237 10:53:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:42.237 --rc lcov_branch_coverage=1 00:01:42.237 --rc lcov_function_coverage=1 00:01:42.237 --rc genhtml_branch_coverage=1 00:01:42.237 --rc genhtml_function_coverage=1 00:01:42.237 --rc genhtml_legend=1 00:01:42.237 --rc geninfo_all_blocks=1 00:01:42.237 ' 00:01:42.237 10:53:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:42.237 --rc lcov_branch_coverage=1 00:01:42.237 --rc lcov_function_coverage=1 00:01:42.237 --rc genhtml_branch_coverage=1 00:01:42.237 --rc genhtml_function_coverage=1 00:01:42.237 --rc genhtml_legend=1 00:01:42.237 --rc geninfo_all_blocks=1 00:01:42.237 --no-external' 00:01:42.237 10:53:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:42.237 --rc lcov_branch_coverage=1 00:01:42.237 --rc lcov_function_coverage=1 
00:01:42.237 --rc genhtml_branch_coverage=1 00:01:42.237 --rc genhtml_function_coverage=1 00:01:42.237 --rc genhtml_legend=1 00:01:42.237 --rc geninfo_all_blocks=1 00:01:42.237 --no-external' 00:01:42.237 10:53:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:42.496 lcov: LCOV version 1.14 00:01:42.496 10:53:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:52.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:52.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:52.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:52.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:52.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:52.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:52.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:52.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:04.713 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:04.713 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no 
functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:04.713 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:04.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:04.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 
00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:04.714 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:04.714 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:04.714 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:04.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:04.714 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:06.090 10:54:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:06.090 10:54:03 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:06.090 10:54:03 -- common/autotest_common.sh@10 -- # set +x 00:02:06.090 10:54:03 -- spdk/autotest.sh@91 -- # rm -f 00:02:06.090 10:54:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:08.621 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:08.621 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:08.621 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:08.621 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:08.621 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:08.621 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:08.621 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.6 (8086 2021): Already using the 
ioatdma driver 00:02:08.880 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:08.880 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:08.880 10:54:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:08.880 10:54:06 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:08.880 10:54:06 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:08.880 10:54:06 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:08.880 10:54:06 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:08.880 10:54:06 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:08.880 10:54:06 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:08.880 10:54:06 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:08.880 10:54:06 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:08.880 10:54:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:08.880 10:54:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:08.880 10:54:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:08.880 10:54:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:08.880 10:54:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:08.880 10:54:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:09.138 No valid GPT data, bailing 00:02:09.138 10:54:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:09.138 10:54:06 -- scripts/common.sh@391 -- # pt= 00:02:09.138 10:54:06 -- scripts/common.sh@392 -- # return 1 00:02:09.138 10:54:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero 
of=/dev/nvme0n1 bs=1M count=1 00:02:09.138 1+0 records in 00:02:09.138 1+0 records out 00:02:09.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073757 s, 142 MB/s 00:02:09.138 10:54:06 -- spdk/autotest.sh@118 -- # sync 00:02:09.138 10:54:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:09.138 10:54:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:09.138 10:54:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:14.407 10:54:11 -- spdk/autotest.sh@124 -- # uname -s 00:02:14.408 10:54:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:14.408 10:54:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:14.408 10:54:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:14.408 10:54:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:14.408 10:54:11 -- common/autotest_common.sh@10 -- # set +x 00:02:14.408 ************************************ 00:02:14.408 START TEST setup.sh 00:02:14.408 ************************************ 00:02:14.408 10:54:11 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:14.408 * Looking for test storage... 
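The trace above shows the `block_in_use` check: `spdk-gpt.py` and `blkid -s PTTYPE` probe the device, and when no partition-table type comes back ("No valid GPT data, bailing"), the first MiB is zeroed with `dd`. A minimal sketch of that decision, demonstrated on a plain file instead of a real `/dev/nvme*` node; the function name and body here are a reconstruction from the trace, not the verbatim `scripts/common.sh` helper:

```shell
#!/usr/bin/env bash
# Reconstruction of the block_in_use/dd step seen in the trace above.
# Assumption: blkid prints the partition-table type (gpt/dos) for -s PTTYPE,
# and an empty result means the device carries no partition table.
set -euo pipefail

wipe_if_unpartitioned() {
    local dev=$1
    local pt
    # blkid exits nonzero with no output when nothing is found; tolerate that.
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
    if [[ -z $pt ]]; then
        # No partition table: clear the first MiB, as the trace does.
        dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc 2>/dev/null
        return 0
    fi
    return 1
}

# Demonstrate on a throwaway file filled with random bytes.
img=$(mktemp)
head -c $((2 * 1024 * 1024)) /dev/urandom > "$img"
wipe_if_unpartitioned "$img"
# Only the first MiB is zeroed; the second MiB keeps its random content.
cmp -s <(head -c $((1024 * 1024)) /dev/zero) <(head -c $((1024 * 1024)) "$img") \
    && echo "first MiB zeroed"
rm -f "$img"
```

`conv=notrunc` matters on a file-backed demo so the write does not shrink the file; on a real block device it is a no-op.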
00:02:14.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:14.408 10:54:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:14.408 10:54:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:14.408 10:54:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:14.408 10:54:11 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:14.408 10:54:11 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:14.408 10:54:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:14.408 ************************************ 00:02:14.408 START TEST acl 00:02:14.408 ************************************ 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:14.408 * Looking for test storage... 00:02:14.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:14.408 10:54:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:14.408 10:54:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:14.408 10:54:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:14.408 10:54:11 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:14.408 10:54:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:14.408 10:54:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:14.408 10:54:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:14.408 10:54:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:14.408 10:54:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:17.767 10:54:14 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:17.767 10:54:14 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:17.767 10:54:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.767 10:54:14 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:17.767 10:54:14 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:17.767 10:54:14 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:20.335 Hugepages 00:02:20.335 node hugesize free / total 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 00:02:20.335 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:20.335 
10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 
00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.335 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:20.336 10:54:17 setup.sh.acl -- setup/acl.sh@54 -- # 
run_test denied denied 00:02:20.336 10:54:17 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:20.336 10:54:17 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:20.336 10:54:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:20.336 ************************************ 00:02:20.336 START TEST denied 00:02:20.336 ************************************ 00:02:20.336 10:54:17 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:20.336 10:54:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:20.336 10:54:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:20.336 10:54:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:20.336 10:54:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:20.336 10:54:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:23.622 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:23.622 10:54:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:23.622 10:54:20 setup.sh.acl.denied -- 
setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:26.908 00:02:26.908 real 0m6.567s 00:02:26.908 user 0m2.135s 00:02:26.908 sys 0m3.729s 00:02:26.908 10:54:24 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:26.908 10:54:24 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:26.908 ************************************ 00:02:26.908 END TEST denied 00:02:26.908 ************************************ 00:02:26.908 10:54:24 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:26.908 10:54:24 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:26.908 10:54:24 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:26.908 10:54:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:26.908 ************************************ 00:02:26.908 START TEST allowed 00:02:26.908 ************************************ 00:02:26.908 10:54:24 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:02:26.908 10:54:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:26.908 10:54:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:26.908 10:54:24 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:26.908 10:54:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.908 10:54:24 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:31.099 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:31.099 10:54:27 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:31.099 10:54:27 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:31.099 10:54:27 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:31.099 10:54:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.099 10:54:27 
setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.633 00:02:33.633 real 0m6.536s 00:02:33.633 user 0m1.991s 00:02:33.633 sys 0m3.671s 00:02:33.633 10:54:30 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:33.633 10:54:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:33.633 ************************************ 00:02:33.633 END TEST allowed 00:02:33.633 ************************************ 00:02:33.633 00:02:33.633 real 0m19.220s 00:02:33.633 user 0m6.391s 00:02:33.633 sys 0m11.446s 00:02:33.633 10:54:30 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:33.633 10:54:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:33.633 ************************************ 00:02:33.633 END TEST acl 00:02:33.633 ************************************ 00:02:33.633 10:54:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.633 10:54:30 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:33.633 10:54:30 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:33.633 10:54:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:33.633 ************************************ 00:02:33.633 START TEST hugepages 00:02:33.633 ************************************ 00:02:33.633 10:54:30 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.633 * Looking for test storage... 
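The `hugepages.sh` run starting here leans on the `get_meminfo` helper from `setup/common.sh`, whose xtrace is visible below: it picks one field out of `/proc/meminfo`, or out of `/sys/devices/system/node/nodeN/meminfo` when a node is given (per-node lines carry a `Node N` prefix that must be stripped). A simplified sketch of that helper; the `MEMINFO_FILE` override is added here purely for testability and is not part of the real script:

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the get_meminfo helper exercised below.
# Assumption: the real helper reads the whole file into an array; this
# sketch extracts a single field directly with awk instead.
set -euo pipefail

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}   # override is for testing only
    # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Strip the "Node N " prefix, then match the requested field.
    awk -v key="$get" '{sub(/^Node [0-9]+ /, "")} $1 == key":" {print $2}' "$mem_f"
}

# On a Linux host this prints the huge page size in kB (2048 on x86_64).
if [[ -r /proc/meminfo ]]; then
    get_meminfo Hugepagesize
fi
```

The trace's `mem=("${mem[@]#Node +([0-9]) }")` line is the array-based form of the same prefix strip.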
00:02:33.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172919964 kB' 'MemAvailable: 176904508 kB' 'Buffers: 3888 kB' 'Cached: 10742636 kB' 'SwapCached: 0 kB' 'Active: 6766636 kB' 'Inactive: 4441772 kB' 'Active(anon): 6203460 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465084 kB' 'Mapped: 212304 kB' 'Shmem: 5741576 kB' 'KReclaimable: 248096 kB' 'Slab: 781584 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 533488 kB' 'KernelStack: 20224 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 7606924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314668 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 
10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.633 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 
10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.634 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.635 10:54:30 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.635 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:33.893 
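The `default_setup` test that begins just below requests `get_test_nr_hugepages 2097152 0`, and the trace shows `nr_hugepages=1024` coming out of it. That follows from the `default_hugepages=2048` read earlier: the requested size in kB divided by the 2048 kB default page size gives the per-node page count. A sketch of that arithmetic (standalone, values taken from the log):

```shell
#!/usr/bin/env bash
# Sizing arithmetic behind hugepages.sh@57 as traced below:
# a 2097152 kB request with a 2048 kB default hugepage size
# resolves to 1024 hugepages on the chosen node (node 0 here).
size_kb=2097152
default_hugepages_kb=2048
nr_hugepages=$(( size_kb / default_hugepages_kb ))
echo "$nr_hugepages"   # → 1024
```

The later meminfo readout in the log (`HugePages_Total: 1024`, `Hugetlb: 2097152 kB`) is consistent with this: 1024 pages of 2048 kB each.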
10:54:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:33.893 10:54:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:33.893 10:54:30 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:33.893 10:54:30 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:33.893 10:54:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:33.893 ************************************ 00:02:33.893 START TEST default_setup 00:02:33.893 ************************************ 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.893 10:54:30 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.893 10:54:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:36.421 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:02:36.422 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:36.422 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:37.385 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175068968 kB' 'MemAvailable: 179053512 kB' 'Buffers: 3888 kB' 'Cached: 10742740 kB' 'SwapCached: 0 kB' 'Active: 6780620 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217444 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479096 kB' 'Mapped: 212256 kB' 'Shmem: 5741680 kB' 'KReclaimable: 248096 kB' 'Slab: 780388 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 532292 kB' 'KernelStack: 20192 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7624680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314604 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 
18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 
10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.385 
10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.385 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.386 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175068388 kB' 'MemAvailable: 179052932 kB' 'Buffers: 3888 kB' 'Cached: 10742744 kB' 'SwapCached: 0 kB' 'Active: 6780908 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217732 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479480 kB' 'Mapped: 212256 kB' 'Shmem: 5741684 kB' 'KReclaimable: 248096 kB' 'Slab: 780412 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 532316 kB' 'KernelStack: 20208 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7624700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314572 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.386 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 
10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.387 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.387 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175081492 kB' 'MemAvailable: 179066036 kB' 'Buffers: 3888 kB' 'Cached: 10742760 kB' 'SwapCached: 0 kB' 'Active: 6780932 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217756 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479488 kB' 'Mapped: 212256 kB' 'Shmem: 5741700 kB' 'KReclaimable: 248096 kB' 'Slab: 780412 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 532316 kB' 'KernelStack: 20208 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7624720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314572 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.388 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 
10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 
10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.389 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:37.390 nr_hugepages=1024 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.390 resv_hugepages=0 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.390 surplus_hugepages=0 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.390 anon_hugepages=0 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@18 -- # local node= 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175081604 kB' 'MemAvailable: 179066148 kB' 'Buffers: 3888 kB' 'Cached: 10742780 kB' 'SwapCached: 0 kB' 'Active: 6780972 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217796 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479488 kB' 'Mapped: 212256 kB' 'Shmem: 5741720 kB' 'KReclaimable: 248096 kB' 'Slab: 780388 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 532292 kB' 'KernelStack: 20208 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7624740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314588 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 
10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.390 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.391 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.391 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84986244 kB' 'MemUsed: 12676440 kB' 'SwapCached: 0 kB' 'Active: 5174308 kB' 'Inactive: 4007712 kB' 'Active(anon): 4706692 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8949896 kB' 'Mapped: 184108 kB' 'AnonPages: 235264 kB' 'Shmem: 4474568 kB' 'KernelStack: 12536 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 378352 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 256832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.392 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.393 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.652 10:54:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... repetitive /proc/meminfo field-scan trace condensed: PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages and FilePmdMapped each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hit continue ...]
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.652 10:54:34
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
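The trace above is the `get_meminfo` helper in setup/common.sh scanning a /proc/meminfo snapshot one `Field: value` pair at a time with `IFS=': ' read -r var val _`, hitting `continue` on every non-matching field until `HugePages_Surp` matches and its value (0) is echoed back. A minimal standalone sketch of that pattern follows; the `get_field` function name and the inline sample data are illustrative, not part of SPDK's setup/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the scan loop seen in the trace: split each "Field: value"
# line on ':' and ' ', skip non-matching fields, print the first match.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # non-matching fields fall through, as in the trace
    echo "$val"
    return 0
  done
  return 1
}

# Feed a meminfo-style snippet matching the values logged above.
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' |
  get_field HugePages_Surp   # prints 0
```

Because `IFS=': '` splits on both colons and spaces, a line like `Hugepagesize: 2048 kB` yields `var=Hugepagesize`, `val=2048`, with the trailing `kB` absorbed by the `_` placeholder.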
00:02:37.652 node0=1024 expecting 1024
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:37.652
00:02:37.652 real 0m3.696s
00:02:37.652 user 0m1.134s
00:02:37.652 sys 0m1.788s
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable
00:02:37.652 10:54:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:37.652 ************************************
00:02:37.652 END TEST default_setup
00:02:37.652 ************************************
00:02:37.652 10:54:34 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:37.652 10:54:34 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:02:37.652 10:54:34 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:02:37.652 10:54:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:37.652 ************************************
00:02:37.652 START TEST per_node_1G_alloc
00:02:37.652 ************************************
00:02:37.652 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:37.653 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:37.653 10:54:34
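The `get_test_nr_hugepages 1048576 0 1` trace above reduces to simple arithmetic: a 1048576 kB (1 GiB) request divided by the 2048 kB default hugepage size gives `nr_hugepages=512`, which `get_test_nr_hugepages_per_node` then assigns to each requested NUMA node. A self-contained sketch of that computation (variable names mirror the trace; the standalone script itself is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage arithmetic traced above (hypothetical
# standalone script; values mirror the logged run).
default_hugepages=2048              # kB; default x86_64 2M hugepage size
size=1048576                        # kB; 1 GiB test allocation requested
user_nodes=(0 1)                    # NUMA nodes passed to get_test_nr_hugepages

nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512

declare -A nodes_test
for node in "${user_nodes[@]}"; do
  nodes_test[$node]=$nr_hugepages   # 512 pages reserved on each node
done

echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"
# prints: NRHUGE=512 HUGENODE=0,1
```

This matches the `NRHUGE=512 HUGENODE=0,1` environment the trace exports before invoking scripts/setup.sh.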
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:40.188 10:54:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:40.188 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:40.189 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:40.189 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local
sorted_s 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.189 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175071592 kB' 'MemAvailable: 179056136 kB' 'Buffers: 3888 kB' 'Cached: 10742876 kB' 'SwapCached: 0 kB' 'Active: 6782996 kB' 'Inactive: 
4441772 kB' 'Active(anon): 6219820 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481228 kB' 'Mapped: 212200 kB' 'Shmem: 5741816 kB' 'KReclaimable: 248096 kB' 'Slab: 779504 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531408 kB' 'KernelStack: 20240 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7626364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314780 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.190 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.190 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... repetitive /proc/meminfo field-scan trace condensed: MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk and Percpu each fail the \A\n\o\n\H\u\g\e\P\a\g\e\s match and hit continue ...]
00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.193 10:54:37
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 
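Each `get_meminfo` call above (first `anon=0` from AnonHugePages, then HugePages_Surp) re-walks the whole snapshot field by field. When several HugePages_* counters are wanted at once, the same values can be pulled in a single pass with awk; this is an alternative sketch, not what setup/common.sh itself does, and the sample snapshot below simply repeats values from the dump above:

```shell
#!/usr/bin/env bash
# Alternative one-pass extraction of the hugepage counters from a
# meminfo-style snapshot (sample data copied from the dump in the log).
meminfo_sample='AnonHugePages:         0 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0'

# Split on ":" plus padding, strip a trailing " kB" unit, emit key=value.
awk -F':[ ]*' '/^(Anon)?HugePages/ { sub(/ kB$/, "", $2); print $1 "=" $2 }' <<<"$meminfo_sample"
```

One plausible reason the original sticks to a pure-bash `read` loop is to avoid forking an external process for every single field lookup inside a tight verification loop.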
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175072140 kB' 'MemAvailable: 179056684 kB' 'Buffers: 3888 kB' 'Cached: 10742880 kB' 'SwapCached: 0 kB' 'Active: 6781608 kB' 'Inactive: 4441772 kB' 'Active(anon): 6218432 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480468 kB' 'Mapped: 212232 kB' 'Shmem: 5741820 kB' 'KReclaimable: 248096 kB' 'Slab: 779608 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531512 kB' 'KernelStack: 20192 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7627712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314748 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.193 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.194 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175071980 kB' 'MemAvailable: 179056524 kB' 'Buffers: 3888 kB' 'Cached: 10742896 kB' 'SwapCached: 0 kB' 'Active: 6781988 kB' 'Inactive: 4441772 kB' 'Active(anon): 6218812 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480284 kB' 'Mapped: 212240 kB' 'Shmem: 5741836 kB' 'KReclaimable: 248096 kB' 'Slab: 779640 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531544 kB' 'KernelStack: 20400 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7626240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314796 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB'
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:40.195 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31-32 cycle (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue) repeats for each subsequent meminfo field, with timestamps advancing from 00:02:40.195 to 00:02:40.460; this chunk of the log is cut off mid-scan at the Bounce field ...]
00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.460 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:40.461 nr_hugepages=1024 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:40.461 resv_hugepages=0 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:40.461 surplus_hugepages=0 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:40.461 anon_hugepages=0 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175071568 kB' 'MemAvailable: 179056112 kB' 'Buffers: 3888 kB' 'Cached: 10742920 kB' 'SwapCached: 0 kB' 'Active: 6781824 kB' 'Inactive: 4441772 kB' 'Active(anon): 6218648 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480088 kB' 'Mapped: 212240 kB' 'Shmem: 5741860 kB' 'KReclaimable: 248096 kB' 'Slab: 779632 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531536 kB' 'KernelStack: 20288 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7627756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:40.461 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
[xtrace condensed: setup/common.sh@31-32 get_meminfo loop compared each /proc/meminfo key (MemTotal through Unaccepted) against HugePages_Total and hit `continue` for every non-matching key, 00:02:40.461-00:02:40.463] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:40.463
10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86028512 kB' 'MemUsed: 11634172 kB' 'SwapCached: 0 kB' 'Active: 5175752 kB' 'Inactive: 4007712 kB' 'Active(anon): 4708136 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8950000 kB' 'Mapped: 184084 kB' 'AnonPages: 236696 kB' 'Shmem: 4474672 kB' 'KernelStack: 12584 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 377592 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 256072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 
10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.463 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.464 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.464 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89041556 kB' 'MemUsed: 4676924 kB' 'SwapCached: 0 kB' 'Active: 1606284 kB' 'Inactive: 434060 kB' 'Active(anon): 1510724 kB' 'Inactive(anon): 0 kB' 'Active(file): 95560 kB' 'Inactive(file): 434060 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1796832 kB' 'Mapped: 28156 kB' 'AnonPages: 243556 kB' 'Shmem: 1267212 kB' 'KernelStack: 7672 kB' 'PageTables: 3096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126576 kB' 'Slab: 402040 kB' 'SReclaimable: 126576 kB' 'SUnreclaim: 275464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue
[xtrace elided: 00:02:40.465 10:54:37 setup.sh.hugepages.per_node_1G_alloc — setup/common.sh@31-32 `read -r var val _` / `continue` repeated for the remaining per-node meminfo fields (Dirty through HugePages_Free), none matching HugePages_Surp]
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:40.466 node0=512 expecting 512
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512
expecting 512'
00:02:40.466 node1=512 expecting 512
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:40.466
00:02:40.466 real	0m2.855s
00:02:40.466 user	0m1.139s
00:02:40.466 sys	0m1.734s
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:02:40.466 10:54:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:40.466 ************************************
00:02:40.466 END TEST per_node_1G_alloc
00:02:40.466 ************************************
00:02:40.466 10:54:37 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:02:40.466 10:54:37 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:02:40.466 10:54:37 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:02:40.466 10:54:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:40.466 ************************************
00:02:40.466 START TEST even_2G_alloc
00:02:40.466 ************************************
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
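[editor's note] The long xtrace runs in this log come from setup/common.sh scanning meminfo output one field at a time with `IFS=': ' read` and `continue`. A minimal standalone sketch of that pattern follows; the helper name and the canned sample are illustrative, not SPDK's exact code:

```shell
#!/usr/bin/env bash
# Sketch of the field-scanning pattern visible in the xtrace: split each
# "Key: value" line on ':' and space, skip non-matching keys with `continue`,
# and echo the value once the requested key (e.g. HugePages_Surp) is found.
get_meminfo_field() {
    local get=$1 input=$2
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # not the field we want; keep scanning
        echo "$val"
        return 0
    done <<< "$input"
    return 1
}

# Canned meminfo-style sample (values taken from the log's own printf).
sample=$'MemTotal: 191381164 kB\nHugePages_Total: 1024\nHugePages_Free: 1024\nHugePages_Surp: 0'
get_meminfo_field HugePages_Surp "$sample"    # prints 0
```

The log also shows that for per-node queries the script reads /sys/devices/system/node/node$node/meminfo instead and strips the leading "Node N " prefix (`mem=("${mem[@]#Node +([0-9]) }")`) before applying the same scan.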
00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]] 00:02:40.466 10:54:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.003 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:43.003 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:43.003 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # 
local resv 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175100472 kB' 'MemAvailable: 179085016 kB' 'Buffers: 3888 kB' 'Cached: 10743036 kB' 'SwapCached: 0 kB' 'Active: 6782500 kB' 'Inactive: 4441772 kB' 'Active(anon): 6219324 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 480860 kB' 'Mapped: 211052 kB' 'Shmem: 5741976 kB' 'KReclaimable: 248096 kB' 'Slab: 779304 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531208 kB' 'KernelStack: 20288 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7616112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314748 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.003 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.003 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.003
[xtrace elided: 10:54:40 setup.sh.hugepages.even_2G_alloc — setup/common.sh@31-32 `read -r var val _` / `continue` repeated for the remaining meminfo fields (Buffers through HardwareCorrupted), none matching AnonHugePages]
00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.270 10:54:40
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175102080 kB' 'MemAvailable: 179086624 kB' 'Buffers: 3888 kB' 'Cached: 10743036 kB' 'SwapCached: 0 kB' 'Active: 6782652 kB' 'Inactive: 4441772 kB' 'Active(anon): 6219476 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481020 kB' 'Mapped: 211052 kB' 'Shmem: 5741976 kB' 'KReclaimable: 248096 kB' 'Slab: 779324 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531228 kB' 'KernelStack: 20192 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7616132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.270 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 
10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 
-- # get_meminfo HugePages_Rsvd 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.271 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175104512 kB' 'MemAvailable: 179089056 kB' 'Buffers: 3888 kB' 'Cached: 10743052 kB' 'SwapCached: 0 kB' 'Active: 6781156 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217980 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479324 kB' 'Mapped: 211040 kB' 'Shmem: 5741992 kB' 'KReclaimable: 248096 kB' 'Slab: 779208 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531112 kB' 'KernelStack: 20128 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 
'Committed_AS: 7614660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314732 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 
10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.272 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 
10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.273 nr_hugepages=1024 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.273 resv_hugepages=0 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
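The long run of `continue` lines above is one pass of the `get_meminfo` loop in `setup/common.sh`: every `/proc/meminfo` key that is not the one requested (`HugePages_Rsvd` here) falls through until the match, whose value is echoed (`resv=0`). A minimal sketch of that pattern — not the actual `setup/common.sh`, and fed a small hypothetical sample instead of the real `/proc/meminfo`:

```shell
# Sketch of the traced loop: split each meminfo line on ': ' and echo the
# value once the requested key is found; non-matching keys fall through,
# which is what every "continue" line in the trace corresponds to.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Hypothetical sample standing in for /proc/meminfo.
sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0'

get_meminfo_sketch HugePages_Rsvd <<< "$sample"   # prints 0
```

The `IFS=': '` makes `read` treat both the colon and the space as separators, so `var` receives the bare key and `val` the number, with the trailing `kB` unit (when present) discarded into `_`.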
setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.273 surplus_hugepages=0 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.273 anon_hugepages=0 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.273 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175106284 kB' 'MemAvailable: 179090828 kB' 'Buffers: 3888 kB' 'Cached: 10743052 kB' 'SwapCached: 0 kB' 'Active: 6781764 kB' 'Inactive: 4441772 kB' 'Active(anon): 
6218588 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479888 kB' 'Mapped: 211040 kB' 'Shmem: 5741992 kB' 'KReclaimable: 248096 kB' 'Slab: 779208 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531112 kB' 'KernelStack: 20240 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7616176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314748 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
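The `mem=("${mem[@]#Node +([0-9]) }")` step visible in the trace exists because per-node meminfo files (`/sys/devices/system/node/nodeN/meminfo`) prefix every line with `Node N `; stripping that prefix lets the same parsing loop handle both the global and per-node files. A small illustration under that assumption, using made-up sample values rather than real counters:

```shell
# The +([0-9]) extended glob needs extglob enabled at expansion time.
shopt -s extglob

# Hypothetical per-node meminfo lines (values are illustrative only).
mem=('Node 0 MemTotal: 95690580 kB' 'Node 0 MemFree: 87000000 kB')

# Strip the leading "Node <n> " from every element, as the trace does.
mem=("${mem[@]#Node +([0-9]) }")

echo "${mem[0]}"
```

After the expansion each element starts directly with the key, so `IFS=': ' read -r var val _` parses it exactly like a line of the global `/proc/meminfo`.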
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.274 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.275 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.276 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86049676 kB' 'MemUsed: 11613008 kB' 'SwapCached: 0 kB' 'Active: 5175316 kB' 'Inactive: 4007712 kB' 'Active(anon): 4707700 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8950152 kB' 'Mapped: 182884 kB' 'AnonPages: 236100 kB' 'Shmem: 4474824 kB' 'KernelStack: 12568 kB' 'PageTables: 5604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 377296 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 255776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 
10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.276 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89060376 kB' 'MemUsed: 4658104 kB' 'SwapCached: 0 kB' 'Active: 1606356 kB' 'Inactive: 434060 kB' 'Active(anon): 1510796 kB' 
'Inactive(anon): 0 kB' 'Active(file): 95560 kB' 'Inactive(file): 434060 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1796836 kB' 'Mapped: 28156 kB' 'AnonPages: 243668 kB' 'Shmem: 1267216 kB' 'KernelStack: 7704 kB' 'PageTables: 2696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126576 kB' 'Slab: 401816 kB' 'SReclaimable: 126576 kB' 'SUnreclaim: 275240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.277 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[... the IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue cycle repeats identically for each remaining /proc/meminfo field, FilePages through HugePages_Free ...]
00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.278 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:43.278 node0=512 expecting 512 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:43.279 node1=512 expecting 512 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:43.279 00:02:43.279 real 0m2.800s 00:02:43.279 user 0m1.134s 00:02:43.279 sys 0m1.685s 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:43.279 10:54:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:43.279 ************************************ 00:02:43.279 END TEST even_2G_alloc 00:02:43.279 ************************************ 00:02:43.279 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:43.279 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:43.279 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:43.279 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:43.279 
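The wall of `[[ <field> == HugePages_Surp ]] / continue` steps above is bash xtrace output of `get_meminfo` scanning /proc/meminfo field by field. A minimal standalone sketch of that scan pattern (the real setup/common.sh also handles per-node meminfo and prefix stripping; the optional file argument here is an addition of this sketch, for testability):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan traced above: split each meminfo line on
# ': ' into key/value, skip non-matching keys, and print the value of the
# requested field (0 if it never appears). The optional $2 file argument
# is an assumption added here so the sketch can run against a fixture.
get_meminfo() {
  local get=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$file"
  echo 0
}
```

With this sketch, `get_meminfo HugePages_Surp` prints the surplus-page count, which the trace above resolves to 0 on this host.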
************************************ 00:02:43.279 START TEST odd_alloc 00:02:43.279 ************************************ 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.279 10:54:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:45.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:45.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:02:45.817 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:45.817 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.817 10:54:43 
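The odd_alloc prologue above asks get_test_nr_hugepages for 2098176 kB, i.e. 1025 pages of 2048 kB, and the descending per-node loop hands node1 512 pages and node0 513. A sketch reproducing that arithmetic (variable names mirror the trace; the ceiling division and even remainder split are assumptions about how hugepages.sh arrives at the numbers shown):

```shell
#!/usr/bin/env bash
# Reproduce the per-node hugepage split from the odd_alloc trace:
# 2098176 kB at 2048 kB per page rounds up to 1025 pages, which the
# descending loop splits as node1=512 then node0=513 (assumption: the
# remainder is divided evenly, highest node index first).
size=2098176
hugepagesize=2048
nr_hugepages=$(( (size + hugepagesize - 1) / hugepagesize ))   # -> 1025
_no_nodes=2
declare -a nodes_test
remaining=$nr_hugepages
for (( n = _no_nodes; n > 0; n-- )); do
  nodes_test[n-1]=$(( remaining / n ))            # 512 for node1, 513 for node0
  remaining=$(( remaining - nodes_test[n-1] ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512
```

This matches the `nodes_test[_no_nodes - 1]=512` and `=513` assignments in the trace, and the odd total is why the test exports HUGEMEM=2049.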
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175078352 kB' 'MemAvailable: 179062896 kB' 'Buffers: 3888 kB' 'Cached: 10743180 kB' 'SwapCached: 0 kB' 'Active: 6780544 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217368 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478420 kB' 'Mapped: 211564 kB' 'Shmem: 5742120 kB' 'KReclaimable: 248096 kB' 'Slab: 779176 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531080 kB' 'KernelStack: 20416 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7618136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.817 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
[... the IFS=': ' / read -r var val _ / [[ <field> == AnonHugePages ]] / continue cycle repeats identically for each remaining /proc/meminfo field, MemAvailable through HardwareCorrupted ...]
00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:45.818
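The `mapfile -t mem` / `mem=("${mem[@]#Node +([0-9]) }")` steps in the trace exist because per-node meminfo files (/sys/devices/system/node/nodeN/meminfo) prefix every line with "Node <N> ". A sketch of that strip in isolation (the sample lines are invented fixtures, not values from this run):

```shell
#!/usr/bin/env bash
# Sketch of the "Node <N> " prefix strip from setup/common.sh@29: with
# extglob enabled, the +([0-9]) pattern removes the per-node prefix from
# every array element before the key/value parse. The sample lines below
# are made-up fixtures, not measurements from this build.
shopt -s extglob
mem=('Node 0 MemTotal: 96000000 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

After the strip, the same `IFS=': ' read -r var val _` scan works unchanged for both the global and the per-node meminfo files.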
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.818 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175069108 kB' 'MemAvailable: 179053652 kB' 'Buffers: 3888 kB' 'Cached: 10743184 kB' 'SwapCached: 0 kB' 'Active: 6784652 kB' 'Inactive: 4441772 kB' 'Active(anon): 6221476 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482540 kB' 'Mapped: 211548 kB' 'Shmem: 5742124 kB' 'KReclaimable: 248096 kB' 'Slab: 779192 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531096 kB' 'KernelStack: 20144 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7621160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314592 kB' 
'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.819 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.819 
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.083 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175072788 kB' 'MemAvailable: 179057332 kB' 'Buffers: 3888 kB' 'Cached: 10743200 kB' 'SwapCached: 0 kB' 'Active: 6780004 kB' 'Inactive: 4441772 kB' 'Active(anon): 6216828 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477916 kB' 'Mapped: 211388 kB' 'Shmem: 5742140 kB' 'KReclaimable: 248096 kB' 'Slab: 779232 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531136 kB' 'KernelStack: 20256 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7616556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314732 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.083 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.084 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:46.084 nr_hugepages=1025 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.084 resv_hugepages=0 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.084 surplus_hugepages=0 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.084 anon_hugepages=0 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175072516 kB' 'MemAvailable: 179057060 kB' 'Buffers: 3888 kB' 'Cached: 10743220 kB' 'SwapCached: 0 kB' 'Active: 6779456 kB' 'Inactive: 4441772 kB' 'Active(anon): 6216280 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477400 kB' 'Mapped: 211044 kB' 'Shmem: 5742160 kB' 'KReclaimable: 248096 kB' 'Slab: 779232 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531136 kB' 'KernelStack: 20176 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7616440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314732 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.084 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.085 10:54:43 
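The long run of `[[ $var == ... ]] || continue` lines above is `setup/common.sh`'s `get_meminfo` scanning `/proc/meminfo` one field at a time until it reaches the requested key (`HugePages_Rsvd`, then `HugePages_Total`), echoing the value and returning. A minimal condensed sketch of that loop (function name and file argument are hypothetical, for illustration only — the real helper in `setup/common.sh` also handles per-node `meminfo` files and strips the `Node N` prefix):

```shell
#!/usr/bin/env bash
# Hypothetical condensed form of the get_meminfo scan traced above:
# walk a meminfo-style file with IFS=': ' and print the value of one key.
get_meminfo_value() {
	local get=$1 mem_f=${2:-/proc/meminfo} var val _
	while IFS=': ' read -r var val _; do
		# Every non-matching field produces one "continue" line in the xtrace.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < "$mem_f"
	return 1
}
```

With the snapshot printed earlier in the trace, `get_meminfo_value HugePages_Total` would walk past `MemTotal`, `MemFree`, and the rest before echoing `1025` — exactly the pattern of `continue` lines recorded here.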
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.085 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86023708 kB' 'MemUsed: 11638976 kB' 'SwapCached: 0 kB' 'Active: 5173160 kB' 'Inactive: 4007712 kB' 'Active(anon): 4705544 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8950280 kB' 'Mapped: 182888 kB' 'AnonPages: 233736 kB' 'Shmem: 4474952 kB' 'KernelStack: 12520 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 377640 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 256120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [... identical skip iterations for every node0 meminfo key before HugePages_Surp elided ...] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc --
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89048380 kB' 'MemUsed: 4670100 kB' 'SwapCached: 0 kB' 'Active: 1607324 kB' 'Inactive: 434060 kB' 'Active(anon): 1511764 kB' 'Inactive(anon): 0 kB' 'Active(file): 95560 kB' 'Inactive(file): 434060 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1796860 kB' 'Mapped: 28156 kB' 'AnonPages: 244636 kB' 'Shmem: 1267240 kB' 'KernelStack: 7832 kB' 'PageTables: 2876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126576 kB' 'Slab: 401592 kB' 'SReclaimable: 126576 kB' 'SUnreclaim: 275016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.086 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.087 10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.087 
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:46.087
node0=512 expecting 513 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:46.087
node1=513 expecting 512 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:46.087
00:02:46.087 real 0m2.720s
00:02:46.087 user 0m1.124s
00:02:46.087 sys 0m1.627s
10:54:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:46.087
10:54:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:46.087
************************************
END TEST odd_alloc
************************************
10:54:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:46.087
10:54:43 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:46.087
10:54:43 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:46.087 10:54:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:46.087 ************************************ 00:02:46.087 START TEST custom_alloc 00:02:46.087 ************************************ 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.087 10:54:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:46.087 10:54:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.087 10:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.621 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:48.621 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.621 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:48.621 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:48.621 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:48.621 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:48.621 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:48.883 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 
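The `get_meminfo` trace above (and repeated below) follows one pattern: scan `/proc/meminfo` record by record with `IFS=': '` and `read -r var val _`, skipping every field until the requested key matches, then echo its value. A minimal standalone sketch of that parsing pattern, not SPDK's actual `setup/common.sh` (the file argument and sample data here are illustrative additions so the sketch can run anywhere):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scan pattern from the trace: split each line on ': ',
# compare the field name to the wanted key, print the value on a match.
get_meminfo() {
  local get=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    # On a line like "HugePages_Total: 1536", var=HugePages_Total, val=1536;
    # a trailing unit such as "kB" lands in _ and is discarded.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$file"
  return 1
}

# Demo against a small sample instead of the live /proc/meminfo:
printf '%s\n' 'MemTotal: 191381164 kB' 'HugePages_Total: 1536' \
  'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo HugePages_Total /tmp/meminfo.sample   # prints 1536
```

The xtrace in the log is so long because `set -x` prints the `IFS`/`read`/`[[ ]]`/`continue` quartet for every one of the ~50 meminfo fields on each lookup.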
00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.883 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174006420 kB' 'MemAvailable: 177990964 kB' 'Buffers: 3888 kB' 'Cached: 10743336 kB' 'SwapCached: 0 kB' 'Active: 6780936 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217760 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478224 
kB' 'Mapped: 211156 kB' 'Shmem: 5742276 kB' 'KReclaimable: 248096 kB' 'Slab: 779576 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531480 kB' 'KernelStack: 20192 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7614576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314748 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:48.883
[meminfo field scan elided: MemTotal through HardwareCorrupted each compared against AnonHugePages and skipped via continue]
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.884
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.884
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.884
10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:48.884
10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.885
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.885
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.885
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.885
10:54:46 setup.sh.hugepages.custom_alloc --
setup/common.sh@20 -- # local mem_f mem 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174007476 kB' 'MemAvailable: 177992020 kB' 'Buffers: 3888 kB' 'Cached: 10743340 kB' 'SwapCached: 0 kB' 'Active: 6780460 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217284 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478256 kB' 'Mapped: 211044 kB' 'Shmem: 5742280 kB' 'KReclaimable: 248096 kB' 'Slab: 779568 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531472 kB' 'KernelStack: 20176 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7614592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.885 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.886 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.886 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174007880 kB' 'MemAvailable: 177992424 kB' 'Buffers: 3888 kB' 'Cached: 10743356 kB' 'SwapCached: 0 kB' 'Active: 6780524 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217348 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478252 kB' 'Mapped: 211044 kB' 'Shmem: 5742296 kB' 'KReclaimable: 248096 kB' 'Slab: 779568 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531472 kB' 'KernelStack: 20176 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7614616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 
2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.887 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:48.888 nr_hugepages=1536 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.888 resv_hugepages=0 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.888 surplus_hugepages=0 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.888 anon_hugepages=0 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.888 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174008164 kB' 'MemAvailable: 177992708 kB' 'Buffers: 3888 kB' 'Cached: 10743376 kB' 'SwapCached: 0 kB' 'Active: 6780540 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217364 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478252 kB' 'Mapped: 211044 kB' 'Shmem: 5742316 kB' 'KReclaimable: 248096 kB' 'Slab: 779568 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531472 kB' 'KernelStack: 20176 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7614636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 
kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.889 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.151 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86034308 kB' 'MemUsed: 11628376 kB' 'SwapCached: 0 kB' 'Active: 5174268 kB' 'Inactive: 4007712 kB' 'Active(anon): 4706652 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8950388 kB' 'Mapped: 182896 kB' 'AnonPages: 234732 kB' 'Shmem: 4475060 kB' 'KernelStack: 12600 kB' 'PageTables: 6320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 377804 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 256284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.152 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 
10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.153 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.154 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 87977388 kB' 'MemUsed: 5741092 kB' 'SwapCached: 0 kB' 'Active: 1606692 kB' 'Inactive: 434060 kB' 'Active(anon): 1511132 kB' 'Inactive(anon): 0 kB' 'Active(file): 95560 kB' 'Inactive(file): 434060 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1796896 kB' 'Mapped: 28148 kB' 'AnonPages: 243880 kB' 'Shmem: 1267276 kB' 'KernelStack: 7592 kB' 'PageTables: 2780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126576 kB' 'Slab: 401764 kB' 'SReclaimable: 126576 kB' 'SUnreclaim: 275188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.154 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.155 10:54:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:49.155 [10:54:46 setup.sh.hugepages.custom_alloc xtrace repeats the IFS=': ' / read -r var val _ / [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for the remaining /proc/meminfo keys (NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) without a match] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.155 node0=512 expecting 512 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:49.155 node1=1024 expecting 1024 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:49.155 00:02:49.155 real 0m2.924s 00:02:49.155 user
0m1.198s 00:02:49.155 sys 0m1.784s 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:49.155 10:54:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:49.155 ************************************ 00:02:49.155 END TEST custom_alloc 00:02:49.155 ************************************ 00:02:49.155 10:54:46 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:49.155 10:54:46 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:49.155 10:54:46 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:49.155 10:54:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:49.155 ************************************ 00:02:49.155 START TEST no_shrink_alloc 00:02:49.155 ************************************ 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:49.155 
10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:49.155 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.156 10:54:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.689 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.689 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 
00:02:51.689 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.689 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.689 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175021452 kB' 'MemAvailable: 179005996 kB' 'Buffers: 3888 kB' 'Cached: 10743476 kB' 'SwapCached: 0 kB' 'Active: 6779900 kB' 'Inactive: 4441772 kB' 'Active(anon): 6216724 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477516 kB' 'Mapped: 211072 kB' 'Shmem: 5742416 kB' 'KReclaimable: 248096 kB' 'Slab: 779896 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531800 kB' 'KernelStack: 20144 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7614752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314620 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:51.689 [10:54:48 setup.sh.hugepages.no_shrink_alloc xtrace repeats the IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle for each /proc/meminfo key from MemTotal through HardwareCorrupted without a match] 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.691 10:54:48
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.691 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175021636 kB' 'MemAvailable: 179006180 kB' 'Buffers: 3888 kB' 'Cached: 10743492 kB' 'SwapCached: 0 kB' 'Active: 6780332 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217156 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478068 kB' 'Mapped: 211072 kB' 'Shmem: 5742432 kB' 'KReclaimable: 248096 kB' 'Slab: 779880 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531784 kB' 'KernelStack: 20160 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7615140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314588 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:51.691 
10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.691 [10:54:48 setup.sh.hugepages.no_shrink_alloc xtrace repeats the IFS=': ' / read -r var val _ / [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for the subsequent /proc/meminfo keys; this log chunk is cut off mid-loop after Inactive(anon)] 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.956 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.957 10:54:48 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175021384 kB' 'MemAvailable: 179005928 kB' 'Buffers: 3888 kB' 'Cached: 10743508 kB' 'SwapCached: 0 kB' 'Active: 6780688 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217512 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478416 kB' 'Mapped: 211072 kB' 'Shmem: 5742448 kB' 'KReclaimable: 248096 kB' 'Slab: 779880 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531784 kB' 'KernelStack: 20176 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7615164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314588 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.957 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.958 10:54:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue trace elided for the remaining /proc/meminfo fields (Writeback through HugePages_Free) ...]
00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@33 -- # return 0 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.959 nr_hugepages=1024 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.959 resv_hugepages=0 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.959 surplus_hugepages=0 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.959 anon_hugepages=0 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175020968 kB' 'MemAvailable: 179005512 kB' 'Buffers: 3888 kB' 'Cached: 10743528 kB' 'SwapCached: 0 kB' 'Active: 6780712 kB' 'Inactive: 4441772 kB' 'Active(anon): 6217536 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478416 kB' 'Mapped: 211072 kB' 'Shmem: 5742468 kB' 'KReclaimable: 248096 kB' 'Slab: 779880 kB' 'SReclaimable: 248096 kB' 'SUnreclaim: 531784 kB' 'KernelStack: 20176 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7615184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314588 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.959 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <field> == HugePages_Total ]] / continue trace elided for the remaining /proc/meminfo fields (MemAvailable through Unaccepted) ...]
00:02:51.961 10:54:49
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var 
val 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84981792 kB' 'MemUsed: 12680892 kB' 'SwapCached: 0 kB' 'Active: 5175252 kB' 'Inactive: 4007712 kB' 'Active(anon): 4707636 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8950520 kB' 'Mapped: 182924 kB' 'AnonPages: 235744 kB' 'Shmem: 4475192 kB' 'KernelStack: 12616 kB' 'PageTables: 5784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 378176 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 256656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.961 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.962 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.963 10:54:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:51.963 node0=1024 expecting 1024 00:02:51.963 10:54:49 
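The `node0=1024 expecting 1024` line above comes out of the per-node bookkeeping in `hugepages.sh`. A rough reconstruction (assumed from the trace, not the verbatim script): reserved/surplus pages are added to each node's expected count, which is then printed next to the value read from sysfs.

```shell
# Assumed sketch of the hugepages.sh per-node accounting shown in the
# trace: nodes_test holds the expected count per node, nodes_sys the
# count read from /sys/devices/system/node/node*/hugepages.
declare -A nodes_test=([0]=1024) nodes_sys=([0]=1024)
resv=0 surp=0   # HugePages_Rsvd / HugePages_Surp from get_meminfo, both 0 here

for node in "${!nodes_test[@]}"; do
  # Fold reserved and surplus pages into the expectation for this node.
  (( nodes_test[node] += resv + surp ))
  echo "node${node}=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
done
# prints: node0=1024 expecting 1024
```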
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.963 10:54:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.564 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.564 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:54.564 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:54.564 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:54.564 10:54:51 
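The `Already using the vfio-pci driver` messages printed by `setup.sh` above reflect which driver each PCI function is currently bound to. A self-contained illustration of the underlying check (the sysfs layout is simulated in a temp directory here; the real script walks `/sys/bus/pci/devices`, and the exact mechanism in `setup.sh` is an assumption): a device's `driver` symlink names its bound driver.

```shell
# Simulated sysfs tree: a device directory whose "driver" symlink points
# at the vfio-pci driver directory, as it does for the devices logged above.
root=$(mktemp -d)
mkdir -p "$root/drivers/vfio-pci" "$root/devices/0000:5e:00.0"
ln -s "$root/drivers/vfio-pci" "$root/devices/0000:5e:00.0/driver"

# Resolve the driver currently bound to a BDF; empty/failure if unbound.
bound_driver() {
  local link=$root/devices/$1/driver
  [[ -e $link ]] && basename "$(readlink "$link")"
}

drv=$(bound_driver 0000:5e:00.0)
echo "$drv"   # prints vfio-pci
```

On a real system the same check would use `/sys/bus/pci/devices/$bdf/driver`; clean up the simulation with `rm -rf "$root"` afterwards.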
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.564 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175052464 kB' 'MemAvailable: 179036992 kB' 'Buffers: 3888 kB' 'Cached: 10743612 kB' 'SwapCached: 0 kB' 'Active: 6784740 kB' 'Inactive: 4441772 kB' 'Active(anon): 6221564 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481788 kB' 'Mapped: 211692 kB' 'Shmem: 5742552 kB' 'KReclaimable: 248064 kB' 'Slab: 778564 kB' 'SReclaimable: 248064 kB' 'SUnreclaim: 530500 kB' 'KernelStack: 20112 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7617372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314620 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.564 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.565 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175052744 kB' 'MemAvailable: 179037272 kB' 'Buffers: 3888 kB' 'Cached: 10743616 kB' 'SwapCached: 0 kB' 'Active: 6788204 kB' 'Inactive: 4441772 kB' 'Active(anon): 6225028 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486232 kB' 'Mapped: 211588 kB' 'Shmem: 5742556 kB' 'KReclaimable: 248064 kB' 'Slab: 778540 kB' 'SReclaimable: 248064 kB' 'SUnreclaim: 530476 kB' 'KernelStack: 20128 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7621756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314592 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.566 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31-32 -- # per-key scan of /proc/meminfo for HugePages_Surp (FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd: no match, continue; repeated read/continue iterations collapsed)
00:02:54.567 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.568 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175053020 kB' 'MemAvailable: 179037548 kB' 'Buffers: 3888 kB' 'Cached: 10743616 kB' 'SwapCached: 0 kB' 'Active: 6788744 kB' 'Inactive: 4441772 kB' 'Active(anon): 6225568 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486280 kB' 'Mapped: 211928 kB' 'Shmem: 5742556 kB' 'KReclaimable: 248064 kB' 'Slab: 778524 kB' 'SReclaimable: 248064 kB' 'SUnreclaim: 530460 kB' 'KernelStack: 20128 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7621776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314592 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:54.568
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # per-key scan of /proc/meminfo for HugePages_Rsvd (MemTotal through HugePages_Free: no match, continue; repeated read/continue iterations collapsed)
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:54.570 nr_hugepages=1024
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:54.570 resv_hugepages=0
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:54.570 surplus_hugepages=0
00:02:54.570 10:54:51
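The surp/resv values above come from SPDK's `get_meminfo` helper in setup/common.sh, whose trace shows the technique: set `IFS=': '` and loop with `read -r var val _` over /proc/meminfo, skipping every key until the requested one matches. A minimal standalone sketch of that parsing technique (a simplified assumption, not the actual setup/common.sh source, which also handles per-NUMA-node meminfo files via the `node=` argument):

```shell
#!/usr/bin/env bash
# Sketch of the per-key /proc/meminfo scan seen in the trace above (simplified).
# get_meminfo KEY [FILE] prints the numeric value recorded for KEY.
get_meminfo() {
	local get=$1 file=${2:-/proc/meminfo} var val _
	while IFS=': ' read -r var val _; do
		# Skip every line until the requested key matches (cf. common.sh@32 continue)
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < "$file"
	return 1
}

# Demo against a fixed snapshot instead of the live /proc/meminfo
sample=$(mktemp)
printf '%s\n' 'MemTotal: 191381164 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

Because `IFS=': '` splits on any run of colons and spaces, `var` receives the key without its trailing colon and `val` the number, with the `kB` unit (if present) falling into the throwaway `_` field.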
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:54.570 anon_hugepages=0
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.570 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175059468 kB' 'MemAvailable: 179043996 kB' 'Buffers: 3888 kB' 'Cached: 10743656 kB' 'SwapCached: 0 kB' 'Active: 6783028 kB' 'Inactive: 4441772 kB' 'Active(anon): 6219852 kB' 'Inactive(anon): 0 kB' 'Active(file): 563176 kB' 'Inactive(file): 4441772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480468 kB' 'Mapped: 211096 kB' 'Shmem: 5742596 kB' 'KReclaimable: 248064 kB' 'Slab: 778524 kB' 'SReclaimable: 248064 kB' 'SUnreclaim: 530460 kB' 'KernelStack: 20080 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7615680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314588 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 18135040 kB' 'DirectMap1G: 181403648 kB' 00:02:54.570
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # per-key scan of /proc/meminfo for HugePages_Total (MemTotal through SReclaimable: no match, continue; repeated read/continue iterations collapsed)
00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:54.571 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.572 10:54:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.572 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85005224 kB' 'MemUsed: 12657460 kB' 'SwapCached: 0 kB' 'Active: 5176192 kB' 'Inactive: 4007712 kB' 'Active(anon): 4708576 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8950664 kB' 'Mapped: 182948 kB' 'AnonPages: 236412 kB' 'Shmem: 4475336 kB' 'KernelStack: 12552 kB' 'PageTables: 5580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121520 kB' 'Slab: 377152 kB' 'SReclaimable: 121520 kB' 'SUnreclaim: 255632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:54.572
[... xtrace elided: per-key scan of the node0 meminfo output for HugePages_Surp — every key from MemTotal through HugePages_Free fails the setup/common.sh@32 match and takes the `continue` branch ...]
10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:54.573 node0=1024 expecting 1024 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:54.573 00:02:54.573 real 0m5.505s 00:02:54.573 user 0m2.224s 00:02:54.573 sys 0m3.311s 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:54.573 10:54:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:54.573 ************************************ 00:02:54.573 END TEST no_shrink_alloc 00:02:54.573 ************************************ 00:02:54.838 10:54:51 setup.sh.hugepages -- 
setup/hugepages.sh@217 -- # clear_hp
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:54.838 10:54:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:54.838
00:02:54.838 real 0m21.078s
00:02:54.838 user 0m8.200s
00:02:54.838 sys 0m12.279s
00:02:54.838 10:54:51 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable
00:02:54.838 10:54:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:54.838 ************************************
00:02:54.838 END TEST hugepages
00:02:54.838 ************************************
00:02:54.838 10:54:51 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:54.838 10:54:51 setup.sh -- common/autotest_common.sh@1098
-- # '[' 2 -le 1 ']'
00:02:54.838 10:54:51 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable
00:02:54.838 10:54:51 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:54.838 ************************************
00:02:54.838 START TEST driver
00:02:54.838 ************************************
00:02:54.838 10:54:51 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:54.838 * Looking for test storage...
00:02:54.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:54.838 10:54:51 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:54.838 10:54:51 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:54.838 10:54:52 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:59.028 10:54:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:59.028 10:54:55 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:02:59.028 10:54:55 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable
00:02:59.028 10:54:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:59.028 ************************************
00:02:59.028 START TEST guess_driver
00:02:59.028 ************************************
00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver
00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:02:59.028
10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:59.028 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:59.028 Looking for driver=vfio-pci 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.028 10:54:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.932 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.932 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.932 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read 
-r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.191 10:54:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:02.126 10:54:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:06.314
00:03:06.314 real 0m7.372s
00:03:06.314 user 0m2.060s
00:03:06.314 sys 0m3.742s
00:03:06.314 10:55:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:06.314 10:55:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:06.314 ************************************
00:03:06.314 END TEST guess_driver
00:03:06.314 ************************************
00:03:06.314
00:03:06.314 real 0m11.304s
00:03:06.314 user 0m3.106s
00:03:06.314 sys 0m5.820s
00:03:06.314 10:55:03 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:06.314 10:55:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:06.314 ************************************
00:03:06.314 END TEST driver
00:03:06.314 ************************************
00:03:06.314 10:55:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:06.314 10:55:03 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:06.314 10:55:03 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:06.314 10:55:03 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:06.314 ************************************
00:03:06.314 START TEST devices
00:03:06.314 ************************************
00:03:06.314 10:55:03 setup.sh.devices -- common/autotest_common.sh@1122 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:06.314 * Looking for test storage... 00:03:06.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.314 10:55:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:06.314 10:55:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:06.314 10:55:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.314 10:55:03 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:09.595 10:55:06 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:09.595 No valid GPT data, bailing 00:03:09.595 10:55:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:09.595 10:55:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:09.595 10:55:06 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:09.595 10:55:06 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:09.595 10:55:06 setup.sh.devices -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:09.595 10:55:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:09.595 ************************************ 00:03:09.595 START TEST nvme_mount 00:03:09.595 ************************************ 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:09.595 10:55:06 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:09.595 10:55:06 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:10.161 Creating new GPT entries in memory.
00:03:10.161 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:10.161 other utilities.
00:03:10.161 10:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:10.161 10:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:10.161 10:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:10.161 10:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:10.161 10:55:07 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:11.536 Creating new GPT entries in memory.
00:03:11.536 The operation has completed successfully.
00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2051714 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:11.536 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:11.537 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:11.537 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.537 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:11.537 10:55:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:11.537 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.537 10:55:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.065 10:55:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.065 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:14.066 
10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:14.066 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:14.066 10:55:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:14.066 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:14.066 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:14.066 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:14.066 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.066 10:55:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:16.596 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.597 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:16.597 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.597 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.597 10:55:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:16.597 10:55:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.597 10:55:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.597 10:55:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:19.157 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:19.415 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:19.415 00:03:19.415 real 0m10.065s 00:03:19.415 user 0m2.773s 00:03:19.415 sys 0m4.989s 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:19.415 10:55:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:19.415 ************************************ 00:03:19.415 END TEST nvme_mount 00:03:19.415 ************************************ 00:03:19.415 10:55:16 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:19.415 10:55:16 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
00:03:19.415 10:55:16 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:19.415 10:55:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:19.415 ************************************ 00:03:19.415 START TEST dm_mount 00:03:19.415 ************************************ 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:19.415 10:55:16 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:20.496 Creating new GPT entries in memory. 00:03:20.496 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:20.496 other utilities. 00:03:20.496 10:55:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:20.496 10:55:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:20.496 10:55:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:20.496 10:55:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:20.496 10:55:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:21.453 Creating new GPT entries in memory. 00:03:21.453 The operation has completed successfully. 00:03:21.453 10:55:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:21.453 10:55:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.453 10:55:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:21.453 10:55:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.453 10:55:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:22.411 The operation has completed successfully. 
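The partition bounds passed to sgdisk above follow directly from the `size /= 512` step in setup/common.sh: each 1 GiB partition is expressed in 512-byte sectors, starting from the conventional 2048-sector alignment. A minimal sketch of that arithmetic, assuming 512-byte logical sectors (the variable names here are illustrative, not taken from the scripts):

```shell
# 1 GiB expressed in 512-byte sectors (mirrors "size /= 512" in setup/common.sh)
size_bytes=1073741824
sectors=$((size_bytes / 512))        # 2097152 sectors per partition

# First partition starts at the conventional 2048-sector alignment
p1_start=2048
p1_end=$((p1_start + sectors - 1))   # matches --new=1:2048:2099199

# Second partition begins immediately after the first
p2_start=$((p1_end + 1))
p2_end=$((p2_start + sectors - 1))   # matches --new=2:2099200:4196351

echo "$p1_end $p2_start $p2_end"
```

The computed bounds (2099199, 2099200, 4196351) match the two `sgdisk --new` invocations recorded in the log.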
00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2055675 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:22.411 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.669 10:55:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.571 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.830 10:55:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.363 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:27.623 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:03:27.623 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.623 10:55:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:27.623 00:03:27.623 real 0m8.118s 00:03:27.623 user 0m1.849s 00:03:27.623 sys 0m3.217s 00:03:27.623 10:55:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:27.623 10:55:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:27.623 ************************************ 00:03:27.623 END TEST dm_mount 00:03:27.623 ************************************ 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.623 10:55:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:27.882 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:27.882 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:27.882 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:27.882 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.882 10:55:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:27.882 00:03:27.882 real 0m21.673s 00:03:27.882 user 0m5.821s 00:03:27.882 sys 0m10.339s 00:03:27.882 10:55:24 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:27.882 10:55:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:27.882 ************************************ 00:03:27.882 END TEST devices 00:03:27.882 ************************************ 00:03:27.882 00:03:27.882 real 1m13.663s 00:03:27.882 user 0m23.666s 00:03:27.882 sys 0m40.142s 00:03:27.882 10:55:24 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:27.882 10:55:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.882 ************************************ 00:03:27.882 END TEST setup.sh 00:03:27.882 ************************************ 00:03:27.882 10:55:25 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:30.413 Hugepages 00:03:30.413 node hugesize free / total 00:03:30.413 node0 1048576kB 0 / 0 00:03:30.413 node0 2048kB 2048 / 2048 00:03:30.413 node1 1048576kB 0 / 0 00:03:30.413 node1 2048kB 0 / 0 00:03:30.413 00:03:30.413 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.413 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:30.413 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:30.413 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:30.413 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:30.413 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:30.413 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:30.413 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:30.413 
I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:30.413 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:30.413 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:30.413 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:30.413 10:55:27 -- spdk/autotest.sh@130 -- # uname -s 00:03:30.413 10:55:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:30.413 10:55:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:30.413 10:55:27 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.949 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.949 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.518 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.777 10:55:30 -- common/autotest_common.sh@1529 -- # sleep 1 00:03:34.715 10:55:31 -- 
common/autotest_common.sh@1530 -- # bdfs=() 00:03:34.715 10:55:31 -- common/autotest_common.sh@1530 -- # local bdfs 00:03:34.715 10:55:31 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:03:34.715 10:55:31 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:03:34.715 10:55:31 -- common/autotest_common.sh@1510 -- # bdfs=() 00:03:34.715 10:55:31 -- common/autotest_common.sh@1510 -- # local bdfs 00:03:34.715 10:55:31 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.715 10:55:31 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.715 10:55:31 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:03:34.715 10:55:31 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:03:34.715 10:55:31 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:5e:00.0 00:03:34.715 10:55:31 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.251 Waiting for block devices as requested 00:03:37.251 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:37.251 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.251 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.251 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.251 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:37.251 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:37.510 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:37.510 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:37.510 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:37.510 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.769 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.769 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.769 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:38.028 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:38.028 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:03:38.028 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.288 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:38.288 10:55:35 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:03:38.288 10:55:35 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1499 -- # grep 0000:5e:00.0/nvme/nvme 00:03:38.288 10:55:35 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:38.288 10:55:35 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:03:38.288 10:55:35 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1542 -- # grep oacs 00:03:38.288 10:55:35 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:03:38.288 10:55:35 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:03:38.288 10:55:35 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:03:38.288 10:55:35 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:03:38.288 10:55:35 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:03:38.288 10:55:35 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:03:38.288 10:55:35 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:03:38.288 10:55:35 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:03:38.288 10:55:35 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:03:38.288 10:55:35 -- 
common/autotest_common.sh@1554 -- # continue 00:03:38.288 10:55:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:38.288 10:55:35 -- common/autotest_common.sh@727 -- # xtrace_disable 00:03:38.288 10:55:35 -- common/autotest_common.sh@10 -- # set +x 00:03:38.288 10:55:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:38.288 10:55:35 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:38.288 10:55:35 -- common/autotest_common.sh@10 -- # set +x 00:03:38.288 10:55:35 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.826 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:40.826 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.085 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.022 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.022 10:55:39 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:42.022 10:55:39 -- common/autotest_common.sh@727 -- # xtrace_disable 00:03:42.022 10:55:39 -- common/autotest_common.sh@10 -- # set +x 00:03:42.022 10:55:39 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:42.022 10:55:39 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:03:42.022 10:55:39 -- 
common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.022 10:55:39 -- common/autotest_common.sh@1574 -- # bdfs=() 00:03:42.022 10:55:39 -- common/autotest_common.sh@1574 -- # local bdfs 00:03:42.022 10:55:39 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:03:42.022 10:55:39 -- common/autotest_common.sh@1510 -- # bdfs=() 00:03:42.022 10:55:39 -- common/autotest_common.sh@1510 -- # local bdfs 00:03:42.022 10:55:39 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.022 10:55:39 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.022 10:55:39 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:03:42.022 10:55:39 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:03:42.022 10:55:39 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:5e:00.0 00:03:42.022 10:55:39 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:03:42.022 10:55:39 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:42.022 10:55:39 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:03:42.022 10:55:39 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:42.022 10:55:39 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:03:42.022 10:55:39 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:5e:00.0 00:03:42.022 10:55:39 -- common/autotest_common.sh@1589 -- # [[ -z 0000:5e:00.0 ]] 00:03:42.022 10:55:39 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.022 10:55:39 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=2064312 00:03:42.022 10:55:39 -- common/autotest_common.sh@1595 -- # waitforlisten 2064312 00:03:42.022 10:55:39 -- common/autotest_common.sh@828 -- # '[' -z 2064312 ']' 00:03:42.022 10:55:39 -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:03:42.022 10:55:39 -- common/autotest_common.sh@833 -- # local max_retries=100 00:03:42.022 10:55:39 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.022 10:55:39 -- common/autotest_common.sh@837 -- # xtrace_disable 00:03:42.022 10:55:39 -- common/autotest_common.sh@10 -- # set +x 00:03:42.022 [2024-05-15 10:55:39.238519] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:03:42.022 [2024-05-15 10:55:39.238560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064312 ] 00:03:42.022 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.281 [2024-05-15 10:55:39.291302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.281 [2024-05-15 10:55:39.370743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.849 10:55:40 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:03:42.849 10:55:40 -- common/autotest_common.sh@861 -- # return 0 00:03:42.849 10:55:40 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:03:42.849 10:55:40 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:03:42.849 10:55:40 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:46.136 nvme0n1 00:03:46.136 10:55:43 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:46.136 [2024-05-15 10:55:43.188948] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:46.136 request: 00:03:46.136 
{ 00:03:46.136 "nvme_ctrlr_name": "nvme0", 00:03:46.136 "password": "test", 00:03:46.136 "method": "bdev_nvme_opal_revert", 00:03:46.136 "req_id": 1 00:03:46.136 } 00:03:46.136 Got JSON-RPC error response 00:03:46.136 response: 00:03:46.136 { 00:03:46.136 "code": -32602, 00:03:46.136 "message": "Invalid parameters" 00:03:46.136 } 00:03:46.136 10:55:43 -- common/autotest_common.sh@1601 -- # true 00:03:46.137 10:55:43 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:03:46.137 10:55:43 -- common/autotest_common.sh@1605 -- # killprocess 2064312 00:03:46.137 10:55:43 -- common/autotest_common.sh@947 -- # '[' -z 2064312 ']' 00:03:46.137 10:55:43 -- common/autotest_common.sh@951 -- # kill -0 2064312 00:03:46.137 10:55:43 -- common/autotest_common.sh@952 -- # uname 00:03:46.137 10:55:43 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:03:46.137 10:55:43 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2064312 00:03:46.137 10:55:43 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:03:46.137 10:55:43 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:03:46.137 10:55:43 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2064312' 00:03:46.137 killing process with pid 2064312 00:03:46.137 10:55:43 -- common/autotest_common.sh@966 -- # kill 2064312 00:03:46.137 10:55:43 -- common/autotest_common.sh@971 -- # wait 2064312 00:03:48.044 10:55:44 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:48.044 10:55:44 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:48.044 10:55:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:48.044 10:55:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:48.044 10:55:44 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:48.044 10:55:44 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:48.044 10:55:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.044 10:55:44 -- spdk/autotest.sh@164 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.044 10:55:44 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:48.044 10:55:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:48.044 10:55:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.044 ************************************ 00:03:48.044 START TEST env 00:03:48.044 ************************************ 00:03:48.044 10:55:44 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.044 * Looking for test storage... 00:03:48.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:48.044 10:55:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.044 10:55:44 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:48.044 10:55:44 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:48.044 10:55:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.044 ************************************ 00:03:48.044 START TEST env_memory 00:03:48.044 ************************************ 00:03:48.044 10:55:44 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.044 00:03:48.044 00:03:48.044 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.044 http://cunit.sourceforge.net/ 00:03:48.044 00:03:48.044 00:03:48.044 Suite: memory 00:03:48.044 Test: alloc and free memory map ...[2024-05-15 10:55:45.039044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:48.044 passed 00:03:48.044 Test: mem map translation ...[2024-05-15 10:55:45.058316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation 
parameters, vaddr=2097152 len=1234 00:03:48.044 [2024-05-15 10:55:45.058330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.044 [2024-05-15 10:55:45.058366] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.044 [2024-05-15 10:55:45.058372] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.044 passed 00:03:48.045 Test: mem map registration ...[2024-05-15 10:55:45.097002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:48.045 [2024-05-15 10:55:45.097014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:48.045 passed 00:03:48.045 Test: mem map adjacent registrations ...passed 00:03:48.045 00:03:48.045 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.045 suites 1 1 n/a 0 0 00:03:48.045 tests 4 4 4 0 0 00:03:48.045 asserts 152 152 152 0 n/a 00:03:48.045 00:03:48.045 Elapsed time = 0.140 seconds 00:03:48.045 00:03:48.045 real 0m0.151s 00:03:48.045 user 0m0.145s 00:03:48.045 sys 0m0.005s 00:03:48.045 10:55:45 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:48.045 10:55:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:48.045 ************************************ 00:03:48.045 END TEST env_memory 00:03:48.045 ************************************ 00:03:48.045 10:55:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.045 
10:55:45 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:48.045 10:55:45 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:48.045 10:55:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.045 ************************************ 00:03:48.045 START TEST env_vtophys 00:03:48.045 ************************************ 00:03:48.045 10:55:45 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.045 EAL: lib.eal log level changed from notice to debug 00:03:48.045 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.045 EAL: Detected lcore 1 as core 1 on socket 0 00:03:48.045 EAL: Detected lcore 2 as core 2 on socket 0 00:03:48.045 EAL: Detected lcore 3 as core 3 on socket 0 00:03:48.045 EAL: Detected lcore 4 as core 4 on socket 0 00:03:48.045 EAL: Detected lcore 5 as core 5 on socket 0 00:03:48.045 EAL: Detected lcore 6 as core 6 on socket 0 00:03:48.045 EAL: Detected lcore 7 as core 8 on socket 0 00:03:48.045 EAL: Detected lcore 8 as core 9 on socket 0 00:03:48.045 EAL: Detected lcore 9 as core 10 on socket 0 00:03:48.045 EAL: Detected lcore 10 as core 11 on socket 0 00:03:48.045 EAL: Detected lcore 11 as core 12 on socket 0 00:03:48.045 EAL: Detected lcore 12 as core 13 on socket 0 00:03:48.045 EAL: Detected lcore 13 as core 16 on socket 0 00:03:48.045 EAL: Detected lcore 14 as core 17 on socket 0 00:03:48.045 EAL: Detected lcore 15 as core 18 on socket 0 00:03:48.045 EAL: Detected lcore 16 as core 19 on socket 0 00:03:48.045 EAL: Detected lcore 17 as core 20 on socket 0 00:03:48.045 EAL: Detected lcore 18 as core 21 on socket 0 00:03:48.045 EAL: Detected lcore 19 as core 25 on socket 0 00:03:48.045 EAL: Detected lcore 20 as core 26 on socket 0 00:03:48.045 EAL: Detected lcore 21 as core 27 on socket 0 00:03:48.045 EAL: Detected lcore 22 as core 28 on socket 0 00:03:48.045 EAL: Detected lcore 23 as core 29 on socket 0 00:03:48.045 EAL: Detected 
lcore 24 as core 0 on socket 1 00:03:48.045 EAL: Detected lcore 25 as core 1 on socket 1 00:03:48.045 EAL: Detected lcore 26 as core 2 on socket 1 00:03:48.045 EAL: Detected lcore 27 as core 3 on socket 1 00:03:48.045 EAL: Detected lcore 28 as core 4 on socket 1 00:03:48.045 EAL: Detected lcore 29 as core 5 on socket 1 00:03:48.045 EAL: Detected lcore 30 as core 6 on socket 1 00:03:48.045 EAL: Detected lcore 31 as core 9 on socket 1 00:03:48.045 EAL: Detected lcore 32 as core 10 on socket 1 00:03:48.045 EAL: Detected lcore 33 as core 11 on socket 1 00:03:48.045 EAL: Detected lcore 34 as core 12 on socket 1 00:03:48.045 EAL: Detected lcore 35 as core 13 on socket 1 00:03:48.045 EAL: Detected lcore 36 as core 16 on socket 1 00:03:48.045 EAL: Detected lcore 37 as core 17 on socket 1 00:03:48.045 EAL: Detected lcore 38 as core 18 on socket 1 00:03:48.045 EAL: Detected lcore 39 as core 19 on socket 1 00:03:48.045 EAL: Detected lcore 40 as core 20 on socket 1 00:03:48.045 EAL: Detected lcore 41 as core 21 on socket 1 00:03:48.045 EAL: Detected lcore 42 as core 24 on socket 1 00:03:48.045 EAL: Detected lcore 43 as core 25 on socket 1 00:03:48.045 EAL: Detected lcore 44 as core 26 on socket 1 00:03:48.045 EAL: Detected lcore 45 as core 27 on socket 1 00:03:48.045 EAL: Detected lcore 46 as core 28 on socket 1 00:03:48.045 EAL: Detected lcore 47 as core 29 on socket 1 00:03:48.045 EAL: Detected lcore 48 as core 0 on socket 0 00:03:48.045 EAL: Detected lcore 49 as core 1 on socket 0 00:03:48.045 EAL: Detected lcore 50 as core 2 on socket 0 00:03:48.045 EAL: Detected lcore 51 as core 3 on socket 0 00:03:48.045 EAL: Detected lcore 52 as core 4 on socket 0 00:03:48.045 EAL: Detected lcore 53 as core 5 on socket 0 00:03:48.045 EAL: Detected lcore 54 as core 6 on socket 0 00:03:48.045 EAL: Detected lcore 55 as core 8 on socket 0 00:03:48.045 EAL: Detected lcore 56 as core 9 on socket 0 00:03:48.045 EAL: Detected lcore 57 as core 10 on socket 0 00:03:48.045 EAL: Detected lcore 58 
as core 11 on socket 0 00:03:48.045 EAL: Detected lcore 59 as core 12 on socket 0 00:03:48.045 EAL: Detected lcore 60 as core 13 on socket 0 00:03:48.045 EAL: Detected lcore 61 as core 16 on socket 0 00:03:48.045 EAL: Detected lcore 62 as core 17 on socket 0 00:03:48.045 EAL: Detected lcore 63 as core 18 on socket 0 00:03:48.045 EAL: Detected lcore 64 as core 19 on socket 0 00:03:48.045 EAL: Detected lcore 65 as core 20 on socket 0 00:03:48.045 EAL: Detected lcore 66 as core 21 on socket 0 00:03:48.045 EAL: Detected lcore 67 as core 25 on socket 0 00:03:48.045 EAL: Detected lcore 68 as core 26 on socket 0 00:03:48.045 EAL: Detected lcore 69 as core 27 on socket 0 00:03:48.045 EAL: Detected lcore 70 as core 28 on socket 0 00:03:48.045 EAL: Detected lcore 71 as core 29 on socket 0 00:03:48.045 EAL: Detected lcore 72 as core 0 on socket 1 00:03:48.045 EAL: Detected lcore 73 as core 1 on socket 1 00:03:48.045 EAL: Detected lcore 74 as core 2 on socket 1 00:03:48.045 EAL: Detected lcore 75 as core 3 on socket 1 00:03:48.045 EAL: Detected lcore 76 as core 4 on socket 1 00:03:48.045 EAL: Detected lcore 77 as core 5 on socket 1 00:03:48.045 EAL: Detected lcore 78 as core 6 on socket 1 00:03:48.046 EAL: Detected lcore 79 as core 9 on socket 1 00:03:48.046 EAL: Detected lcore 80 as core 10 on socket 1 00:03:48.046 EAL: Detected lcore 81 as core 11 on socket 1 00:03:48.046 EAL: Detected lcore 82 as core 12 on socket 1 00:03:48.046 EAL: Detected lcore 83 as core 13 on socket 1 00:03:48.046 EAL: Detected lcore 84 as core 16 on socket 1 00:03:48.046 EAL: Detected lcore 85 as core 17 on socket 1 00:03:48.046 EAL: Detected lcore 86 as core 18 on socket 1 00:03:48.046 EAL: Detected lcore 87 as core 19 on socket 1 00:03:48.046 EAL: Detected lcore 88 as core 20 on socket 1 00:03:48.046 EAL: Detected lcore 89 as core 21 on socket 1 00:03:48.046 EAL: Detected lcore 90 as core 24 on socket 1 00:03:48.046 EAL: Detected lcore 91 as core 25 on socket 1 00:03:48.046 EAL: Detected lcore 92 
as core 26 on socket 1 00:03:48.046 EAL: Detected lcore 93 as core 27 on socket 1 00:03:48.046 EAL: Detected lcore 94 as core 28 on socket 1 00:03:48.046 EAL: Detected lcore 95 as core 29 on socket 1 00:03:48.046 EAL: Maximum logical cores by configuration: 128 00:03:48.046 EAL: Detected CPU lcores: 96 00:03:48.046 EAL: Detected NUMA nodes: 2 00:03:48.046 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:48.046 EAL: Detected shared linkage of DPDK 00:03:48.046 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.046 EAL: Bus pci wants IOVA as 'DC' 00:03:48.046 EAL: Buses did not request a specific IOVA mode. 00:03:48.046 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:48.046 EAL: Selected IOVA mode 'VA' 00:03:48.046 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.046 EAL: Probing VFIO support... 00:03:48.046 EAL: IOMMU type 1 (Type 1) is supported 00:03:48.046 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:48.046 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:48.046 EAL: VFIO support initialized 00:03:48.046 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.046 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.046 EAL: Setting up physically contiguous memory... 
00:03:48.046 EAL: Setting maximum number of open files to 524288 00:03:48.046 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.046 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:48.046 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.046 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.046 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.046 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.046 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.046 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.046 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.046 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.046 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.046 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.046 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.046 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.046 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.046 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.046 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.046 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.046 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.046 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.046 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.046 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.047 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.047 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:48.047 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.047 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:48.047 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.047 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.047 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:48.047 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:48.047 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.047 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:48.047 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.047 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.047 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:48.047 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:48.047 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.047 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:48.047 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.047 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.047 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:48.047 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:48.047 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.047 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:48.047 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.047 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.047 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:48.047 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:48.047 EAL: Hugepages will be freed exactly as allocated. 
00:03:48.047 EAL: No shared files mode enabled, IPC is disabled 00:03:48.047 EAL: No shared files mode enabled, IPC is disabled 00:03:48.047 EAL: TSC frequency is ~2300000 KHz 00:03:48.047 EAL: Main lcore 0 is ready (tid=7f7f3add0a00;cpuset=[0]) 00:03:48.047 EAL: Trying to obtain current memory policy. 00:03:48.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.047 EAL: Restoring previous memory policy: 0 00:03:48.047 EAL: request: mp_malloc_sync 00:03:48.047 EAL: No shared files mode enabled, IPC is disabled 00:03:48.047 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.047 EAL: No shared files mode enabled, IPC is disabled 00:03:48.047 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.047 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.047 00:03:48.047 00:03:48.047 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.047 http://cunit.sourceforge.net/ 00:03:48.047 00:03:48.047 00:03:48.047 Suite: components_suite 00:03:48.047 Test: vtophys_malloc_test ...passed 00:03:48.047 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.047 EAL: Restoring previous memory policy: 4 00:03:48.047 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.047 EAL: request: mp_malloc_sync 00:03:48.047 EAL: No shared files mode enabled, IPC is disabled 00:03:48.047 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.047 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.047 EAL: request: mp_malloc_sync 00:03:48.047 EAL: No shared files mode enabled, IPC is disabled 00:03:48.047 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.047 EAL: Trying to obtain current memory policy. 
00:03:48.048 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.048 EAL: Restoring previous memory policy: 4
00:03:48.048 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.048 EAL: request: mp_malloc_sync
00:03:48.048 EAL: No shared files mode enabled, IPC is disabled
00:03:48.048 EAL: Heap on socket 0 was expanded by 6MB
00:03:48.048 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.048 EAL: request: mp_malloc_sync
00:03:48.048 EAL: No shared files mode enabled, IPC is disabled
00:03:48.048 EAL: Heap on socket 0 was shrunk by 6MB
00:03:48.048 EAL: Trying to obtain current memory policy.
00:03:48.048 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.048 EAL: Restoring previous memory policy: 4
00:03:48.048 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.048 EAL: request: mp_malloc_sync
00:03:48.048 EAL: No shared files mode enabled, IPC is disabled
00:03:48.048 EAL: Heap on socket 0 was expanded by 10MB
00:03:48.048 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.048 EAL: request: mp_malloc_sync
00:03:48.048 EAL: No shared files mode enabled, IPC is disabled
00:03:48.048 EAL: Heap on socket 0 was shrunk by 10MB
00:03:48.048 EAL: Trying to obtain current memory policy.
00:03:48.048 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.048 EAL: Restoring previous memory policy: 4
00:03:48.048 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.048 EAL: request: mp_malloc_sync
00:03:48.048 EAL: No shared files mode enabled, IPC is disabled
00:03:48.048 EAL: Heap on socket 0 was expanded by 18MB
00:03:48.048 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.048 EAL: request: mp_malloc_sync
00:03:48.048 EAL: No shared files mode enabled, IPC is disabled
00:03:48.048 EAL: Heap on socket 0 was shrunk by 18MB
00:03:48.048 EAL: Trying to obtain current memory policy.
00:03:48.048 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.307 EAL: Restoring previous memory policy: 4
00:03:48.307 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.307 EAL: request: mp_malloc_sync
00:03:48.307 EAL: No shared files mode enabled, IPC is disabled
00:03:48.307 EAL: Heap on socket 0 was expanded by 34MB
00:03:48.307 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.307 EAL: request: mp_malloc_sync
00:03:48.307 EAL: No shared files mode enabled, IPC is disabled
00:03:48.307 EAL: Heap on socket 0 was shrunk by 34MB
00:03:48.307 EAL: Trying to obtain current memory policy.
00:03:48.307 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.307 EAL: Restoring previous memory policy: 4
00:03:48.307 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.307 EAL: request: mp_malloc_sync
00:03:48.307 EAL: No shared files mode enabled, IPC is disabled
00:03:48.307 EAL: Heap on socket 0 was expanded by 66MB
00:03:48.307 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.308 EAL: request: mp_malloc_sync
00:03:48.308 EAL: No shared files mode enabled, IPC is disabled
00:03:48.308 EAL: Heap on socket 0 was shrunk by 66MB
00:03:48.308 EAL: Trying to obtain current memory policy.
00:03:48.308 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.308 EAL: Restoring previous memory policy: 4
00:03:48.308 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.308 EAL: request: mp_malloc_sync
00:03:48.308 EAL: No shared files mode enabled, IPC is disabled
00:03:48.308 EAL: Heap on socket 0 was expanded by 130MB
00:03:48.308 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.308 EAL: request: mp_malloc_sync
00:03:48.308 EAL: No shared files mode enabled, IPC is disabled
00:03:48.308 EAL: Heap on socket 0 was shrunk by 130MB
00:03:48.308 EAL: Trying to obtain current memory policy.
00:03:48.308 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.308 EAL: Restoring previous memory policy: 4
00:03:48.308 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.308 EAL: request: mp_malloc_sync
00:03:48.308 EAL: No shared files mode enabled, IPC is disabled
00:03:48.308 EAL: Heap on socket 0 was expanded by 258MB
00:03:48.308 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.308 EAL: request: mp_malloc_sync
00:03:48.308 EAL: No shared files mode enabled, IPC is disabled
00:03:48.308 EAL: Heap on socket 0 was shrunk by 258MB
00:03:48.308 EAL: Trying to obtain current memory policy.
00:03:48.308 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.566 EAL: Restoring previous memory policy: 4
00:03:48.566 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.566 EAL: request: mp_malloc_sync
00:03:48.566 EAL: No shared files mode enabled, IPC is disabled
00:03:48.566 EAL: Heap on socket 0 was expanded by 514MB
00:03:48.566 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.566 EAL: request: mp_malloc_sync
00:03:48.566 EAL: No shared files mode enabled, IPC is disabled
00:03:48.566 EAL: Heap on socket 0 was shrunk by 514MB
00:03:48.566 EAL: Trying to obtain current memory policy.
00:03:48.566 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.838 EAL: Restoring previous memory policy: 4
00:03:48.838 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.838 EAL: request: mp_malloc_sync
00:03:48.838 EAL: No shared files mode enabled, IPC is disabled
00:03:48.838 EAL: Heap on socket 0 was expanded by 1026MB
00:03:49.186 EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.186 EAL: request: mp_malloc_sync
00:03:49.186 EAL: No shared files mode enabled, IPC is disabled
00:03:49.186 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:49.186 passed
00:03:49.186 
00:03:49.186 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.186               suites      1      1    n/a      0        0
00:03:49.186                tests      2      2      2      0        0
00:03:49.186              asserts    497    497    497      0      n/a
00:03:49.186 
00:03:49.186 Elapsed time = 0.965 seconds
00:03:49.186 EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.186 EAL: request: mp_malloc_sync
00:03:49.186 EAL: No shared files mode enabled, IPC is disabled
00:03:49.186 EAL: Heap on socket 0 was shrunk by 2MB
00:03:49.186 EAL: No shared files mode enabled, IPC is disabled
00:03:49.186 EAL: No shared files mode enabled, IPC is disabled
00:03:49.186 EAL: No shared files mode enabled, IPC is disabled
00:03:49.186 
00:03:49.186 real 0m1.074s
00:03:49.186 user 0m0.629s
00:03:49.186 sys 0m0.417s
00:03:49.186 10:55:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:49.186 10:55:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:49.186 ************************************
00:03:49.186 END TEST env_vtophys
00:03:49.186 ************************************
00:03:49.186 10:55:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:49.186 10:55:46 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:49.186 10:55:46 env -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:49.186 10:55:46 env -- common/autotest_common.sh@10 -- # set +x
00:03:49.186 ************************************
00:03:49.186 START TEST env_pci
00:03:49.186 ************************************
00:03:49.186 10:55:46 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:49.186 
00:03:49.186 
00:03:49.186 CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.186 http://cunit.sourceforge.net/
00:03:49.186 
00:03:49.186 
00:03:49.186 Suite: pci
00:03:49.186 Test: pci_hook ...[2024-05-15 10:55:46.381231] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2065747 has claimed it
00:03:49.186 EAL: Cannot find device (10000:00:01.0)
00:03:49.186 EAL: Failed to attach device on primary process
00:03:49.186 passed
00:03:49.186 
00:03:49.186 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.186               suites      1      1    n/a      0        0
00:03:49.186                tests      1      1      1      0        0
00:03:49.186              asserts     25     25     25      0      n/a
00:03:49.186 
00:03:49.186 Elapsed time = 0.025 seconds
00:03:49.186 
00:03:49.186 real 0m0.044s
00:03:49.186 user 0m0.015s
00:03:49.186 sys 0m0.029s
00:03:49.186 10:55:46 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:49.186 10:55:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:49.186 ************************************
00:03:49.186 END TEST env_pci
00:03:49.186 ************************************
00:03:49.448 10:55:46 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:49.448 10:55:46 env -- env/env.sh@15 -- # uname
00:03:49.448 10:55:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:49.448 10:55:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:49.448 10:55:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:49.448 10:55:46 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']'
00:03:49.448 10:55:46 env -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:49.448 10:55:46 env -- common/autotest_common.sh@10 -- # set +x
00:03:49.448 ************************************
00:03:49.448 START TEST env_dpdk_post_init
00:03:49.448 ************************************
00:03:49.448 10:55:46 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:49.448 EAL: Detected CPU lcores: 96
00:03:49.448 EAL: Detected NUMA nodes: 2
00:03:49.448 EAL: Detected shared linkage of DPDK
00:03:49.448 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:49.448 EAL: Selected IOVA mode 'VA'
00:03:49.448 EAL: No free 2048 kB hugepages reported on node 1
00:03:49.448 EAL: VFIO support initialized
00:03:49.448 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:49.448 EAL: Using IOMMU type 1 (Type 1)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:49.448 EAL: Ignore mapping IO port bar(1)
00:03:49.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:50.385 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:50.385 EAL: Ignore mapping IO port bar(1)
00:03:50.385 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:50.385 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:50.386 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:50.386 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:50.386 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:50.386 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:50.386 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:50.386 EAL: Ignore mapping IO port bar(1)
00:03:50.386 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:53.675 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:53.675 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:53.675 Starting DPDK initialization...
00:03:53.675 Starting SPDK post initialization...
00:03:53.675 SPDK NVMe probe
00:03:53.675 Attaching to 0000:5e:00.0
00:03:53.675 Attached to 0000:5e:00.0
00:03:53.675 Cleaning up...
00:03:53.675 
00:03:53.675 real 0m4.344s
00:03:53.675 user 0m3.274s
00:03:53.675 sys 0m0.143s
00:03:53.675 10:55:50 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:53.675 10:55:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:53.675 ************************************
00:03:53.675 END TEST env_dpdk_post_init
00:03:53.675 ************************************
00:03:53.675 10:55:50 env -- env/env.sh@26 -- # uname
00:03:53.675 10:55:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:53.675 10:55:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:53.675 10:55:50 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:53.675 10:55:50 env -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:53.675 10:55:50 env -- common/autotest_common.sh@10 -- # set +x
00:03:53.675 ************************************
00:03:53.675 START TEST env_mem_callbacks
00:03:53.675 ************************************
00:03:53.675 10:55:50 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:53.675 EAL: Detected CPU lcores: 96
00:03:53.675 EAL: Detected NUMA nodes: 2
00:03:53.675 EAL: Detected shared linkage of DPDK
00:03:53.675 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:53.935 EAL: Selected IOVA mode 'VA'
00:03:53.935 EAL: No free 2048 kB hugepages reported on node 1
00:03:53.935 EAL: VFIO support initialized
00:03:53.935 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:53.935 
00:03:53.935 
00:03:53.935 CUnit - A unit testing framework for C - Version 2.1-3
00:03:53.935 http://cunit.sourceforge.net/
00:03:53.935 
00:03:53.935 
00:03:53.935 Suite: memory
00:03:53.935 Test: test ...
00:03:53.935 register 0x200000200000 2097152
00:03:53.935 malloc 3145728
00:03:53.935 register 0x200000400000 4194304
00:03:53.935 buf 0x200000500000 len 3145728 PASSED
00:03:53.935 malloc 64
00:03:53.935 buf 0x2000004fff40 len 64 PASSED
00:03:53.935 malloc 4194304
00:03:53.935 register 0x200000800000 6291456
00:03:53.935 buf 0x200000a00000 len 4194304 PASSED
00:03:53.935 free 0x200000500000 3145728
00:03:53.935 free 0x2000004fff40 64
00:03:53.935 unregister 0x200000400000 4194304 PASSED
00:03:53.935 free 0x200000a00000 4194304
00:03:53.935 unregister 0x200000800000 6291456 PASSED
00:03:53.935 malloc 8388608
00:03:53.935 register 0x200000400000 10485760
00:03:53.935 buf 0x200000600000 len 8388608 PASSED
00:03:53.935 free 0x200000600000 8388608
00:03:53.935 unregister 0x200000400000 10485760 PASSED
00:03:53.935 passed
00:03:53.935 
00:03:53.935 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:53.935               suites      1      1    n/a      0        0
00:03:53.935                tests      1      1      1      0        0
00:03:53.935              asserts     15     15     15      0      n/a
00:03:53.935 
00:03:53.935 Elapsed time = 0.005 seconds
00:03:53.935 
00:03:53.935 real 0m0.052s
00:03:53.935 user 0m0.016s
00:03:53.935 sys 0m0.036s
00:03:53.935 10:55:50 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:53.935 10:55:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:53.935 ************************************
00:03:53.935 END TEST env_mem_callbacks
00:03:53.935 ************************************
00:03:53.935 
00:03:53.935 real 0m6.094s
00:03:53.935 user 0m4.253s
00:03:53.935 sys 0m0.903s
00:03:53.935 10:55:50 env -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:53.935 10:55:50 env -- common/autotest_common.sh@10 -- # set +x
00:03:53.935 ************************************
00:03:53.935 END TEST env
00:03:53.935 ************************************
00:03:53.935 10:55:51 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:53.935 10:55:51 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:53.935 10:55:51 -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:53.935 10:55:51 -- common/autotest_common.sh@10 -- # set +x
00:03:53.935 ************************************
00:03:53.935 START TEST rpc
00:03:53.935 ************************************
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:53.935 * Looking for test storage...
00:03:53.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:53.935 10:55:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2066575
00:03:53.935 10:55:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:53.935 10:55:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:53.935 10:55:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2066575
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@828 -- # '[' -z 2066575 ']'
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@833 -- # local max_retries=100
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:53.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@837 -- # xtrace_disable
00:03:53.935 10:55:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:53.935 [2024-05-15 10:55:51.176688] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:03:53.935 [2024-05-15 10:55:51.176732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066575 ]
00:03:53.935 EAL: No free 2048 kB hugepages reported on node 1
00:03:54.194 [2024-05-15 10:55:51.229167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:54.195 [2024-05-15 10:55:51.302793] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:54.195 [2024-05-15 10:55:51.302834] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2066575' to capture a snapshot of events at runtime.
00:03:54.195 [2024-05-15 10:55:51.302843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:54.195 [2024-05-15 10:55:51.302848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:54.195 [2024-05-15 10:55:51.302853] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2066575 for offline analysis/debug.
00:03:54.195 [2024-05-15 10:55:51.302878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:54.763 10:55:51 rpc -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:03:54.763 10:55:51 rpc -- common/autotest_common.sh@861 -- # return 0
00:03:54.763 10:55:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:54.763 10:55:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:54.763 10:55:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:54.763 10:55:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:54.763 10:55:51 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:54.763 10:55:51 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:54.763 10:55:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.763 ************************************
00:03:54.763 START TEST rpc_integrity
00:03:54.763 ************************************
00:03:54.763 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity
00:03:54.763 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:54.763 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:54.763 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.763 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:54.763 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:55.023 {
00:03:55.023 "name": "Malloc0",
00:03:55.023 "aliases": [
00:03:55.023 "67efcc12-f698-4140-9a80-226b5d544b8a"
00:03:55.023 ],
00:03:55.023 "product_name": "Malloc disk",
00:03:55.023 "block_size": 512,
00:03:55.023 "num_blocks": 16384,
00:03:55.023 "uuid": "67efcc12-f698-4140-9a80-226b5d544b8a",
00:03:55.023 "assigned_rate_limits": {
00:03:55.023 "rw_ios_per_sec": 0,
00:03:55.023 "rw_mbytes_per_sec": 0,
00:03:55.023 "r_mbytes_per_sec": 0,
00:03:55.023 "w_mbytes_per_sec": 0
00:03:55.023 },
00:03:55.023 "claimed": false,
00:03:55.023 "zoned": false,
00:03:55.023 "supported_io_types": {
00:03:55.023 "read": true,
00:03:55.023 "write": true,
00:03:55.023 "unmap": true,
00:03:55.023 "write_zeroes": true,
00:03:55.023 "flush": true,
00:03:55.023 "reset": true,
00:03:55.023 "compare": false,
00:03:55.023 "compare_and_write": false,
00:03:55.023 "abort": true,
00:03:55.023 "nvme_admin": false,
00:03:55.023 "nvme_io": false
00:03:55.023 },
00:03:55.023 "memory_domains": [
00:03:55.023 {
00:03:55.023 "dma_device_id": "system",
00:03:55.023 "dma_device_type": 1
00:03:55.023 },
00:03:55.023 {
00:03:55.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.023 "dma_device_type": 2
00:03:55.023 }
00:03:55.023 ],
00:03:55.023 "driver_specific": {}
00:03:55.023 }
00:03:55.023 ]'
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.023 [2024-05-15 10:55:52.149864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:55.023 [2024-05-15 10:55:52.149894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:55.023 [2024-05-15 10:55:52.149906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1975260
00:03:55.023 [2024-05-15 10:55:52.149912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:55.023 [2024-05-15 10:55:52.151180] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:55.023 [2024-05-15 10:55:52.151201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:55.023 Passthru0
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.023 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.023 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:55.023 {
00:03:55.023 "name": "Malloc0",
00:03:55.023 "aliases": [
00:03:55.023 "67efcc12-f698-4140-9a80-226b5d544b8a"
00:03:55.023 ],
00:03:55.023 "product_name": "Malloc disk",
00:03:55.023 "block_size": 512,
00:03:55.023 "num_blocks": 16384,
00:03:55.023 "uuid": "67efcc12-f698-4140-9a80-226b5d544b8a",
00:03:55.023 "assigned_rate_limits": {
00:03:55.023 "rw_ios_per_sec": 0,
00:03:55.023 "rw_mbytes_per_sec": 0,
00:03:55.023 "r_mbytes_per_sec": 0,
00:03:55.023 "w_mbytes_per_sec": 0
00:03:55.023 },
00:03:55.023 "claimed": true,
00:03:55.023 "claim_type": "exclusive_write",
00:03:55.023 "zoned": false,
00:03:55.023 "supported_io_types": {
00:03:55.023 "read": true,
00:03:55.023 "write": true,
00:03:55.023 "unmap": true,
00:03:55.023 "write_zeroes": true,
00:03:55.023 "flush": true,
00:03:55.023 "reset": true,
00:03:55.023 "compare": false,
00:03:55.023 "compare_and_write": false,
00:03:55.023 "abort": true,
00:03:55.023 "nvme_admin": false,
00:03:55.023 "nvme_io": false
00:03:55.023 },
00:03:55.023 "memory_domains": [
00:03:55.023 {
00:03:55.023 "dma_device_id": "system",
00:03:55.023 "dma_device_type": 1
00:03:55.023 },
00:03:55.023 {
00:03:55.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.023 "dma_device_type": 2
00:03:55.023 }
00:03:55.023 ],
00:03:55.023 "driver_specific": {}
00:03:55.023 },
00:03:55.023 {
00:03:55.023 "name": "Passthru0",
00:03:55.023 "aliases": [
00:03:55.023 "b8c1f1bc-cc68-5a53-b509-4038cc075258"
00:03:55.023 ],
00:03:55.023 "product_name": "passthru",
00:03:55.023 "block_size": 512,
00:03:55.023 "num_blocks": 16384,
00:03:55.023 "uuid": "b8c1f1bc-cc68-5a53-b509-4038cc075258",
00:03:55.023 "assigned_rate_limits": {
00:03:55.023 "rw_ios_per_sec": 0,
00:03:55.023 "rw_mbytes_per_sec": 0,
00:03:55.023 "r_mbytes_per_sec": 0,
00:03:55.023 "w_mbytes_per_sec": 0
00:03:55.023 },
00:03:55.023 "claimed": false,
00:03:55.023 "zoned": false,
00:03:55.023 "supported_io_types": {
00:03:55.023 "read": true,
00:03:55.024 "write": true,
00:03:55.024 "unmap": true,
00:03:55.024 "write_zeroes": true,
00:03:55.024 "flush": true,
00:03:55.024 "reset": true,
00:03:55.024 "compare": false,
00:03:55.024 "compare_and_write": false,
00:03:55.024 "abort": true,
00:03:55.024 "nvme_admin": false,
00:03:55.024 "nvme_io": false
00:03:55.024 },
00:03:55.024 "memory_domains": [
00:03:55.024 {
00:03:55.024 "dma_device_id": "system",
00:03:55.024 "dma_device_type": 1
00:03:55.024 },
00:03:55.024 {
00:03:55.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.024 "dma_device_type": 2
00:03:55.024 }
00:03:55.024 ],
00:03:55.024 "driver_specific": {
00:03:55.024 "passthru": {
00:03:55.024 "name": "Passthru0",
00:03:55.024 "base_bdev_name": "Malloc0"
00:03:55.024 }
00:03:55.024 }
00:03:55.024 }
00:03:55.024 ]'
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:55.024 10:55:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:55.024 
00:03:55.024 real 0m0.270s
00:03:55.024 user 0m0.167s
00:03:55.024 sys 0m0.037s
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:55.024 10:55:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.024 ************************************
00:03:55.024 END TEST rpc_integrity
00:03:55.024 ************************************
00:03:55.284 10:55:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:03:55.284 10:55:52 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:55.284 10:55:52 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:55.284 10:55:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:55.284 ************************************
00:03:55.284 START TEST rpc_plugins
00:03:55.284 ************************************
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:03:55.284 {
00:03:55.284 "name": "Malloc1",
00:03:55.284 "aliases": [
00:03:55.284 "a4efc043-273f-4d4c-bde7-91c1328949fc"
00:03:55.284 ],
00:03:55.284 "product_name": "Malloc disk",
00:03:55.284 "block_size": 4096,
00:03:55.284 "num_blocks": 256,
00:03:55.284 "uuid": "a4efc043-273f-4d4c-bde7-91c1328949fc",
00:03:55.284 "assigned_rate_limits": {
00:03:55.284 "rw_ios_per_sec": 0,
00:03:55.284 "rw_mbytes_per_sec": 0,
00:03:55.284 "r_mbytes_per_sec": 0,
00:03:55.284 "w_mbytes_per_sec": 0
00:03:55.284 },
00:03:55.284 "claimed": false,
00:03:55.284 "zoned": false,
00:03:55.284 "supported_io_types": {
00:03:55.284 "read": true,
00:03:55.284 "write": true,
00:03:55.284 "unmap": true,
00:03:55.284 "write_zeroes": true,
00:03:55.284 "flush": true,
00:03:55.284 "reset": true,
00:03:55.284 "compare": false,
00:03:55.284 "compare_and_write": false,
00:03:55.284 "abort": true,
00:03:55.284 "nvme_admin": false,
00:03:55.284 "nvme_io": false
00:03:55.284 },
00:03:55.284 "memory_domains": [
00:03:55.284 {
00:03:55.284 "dma_device_id": "system",
00:03:55.284 "dma_device_type": 1
00:03:55.284 },
00:03:55.284 {
00:03:55.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.284 "dma_device_type": 2
00:03:55.284 }
00:03:55.284 ],
00:03:55.284 "driver_specific": {}
00:03:55.284 }
00:03:55.284 ]'
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:03:55.284 10:55:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:03:55.284 
00:03:55.284 real 0m0.132s
00:03:55.284 user 0m0.086s
00:03:55.284 sys 0m0.015s
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:55.284 10:55:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:55.284 ************************************
00:03:55.284 END TEST rpc_plugins
00:03:55.284 ************************************
00:03:55.284 10:55:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:03:55.284 10:55:52 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:55.284 10:55:52 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:55.284 10:55:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:55.543 ************************************
00:03:55.543 START TEST rpc_trace_cmd_test
00:03:55.543 ************************************
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:03:55.543 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2066575",
00:03:55.543 "tpoint_group_mask": "0x8",
00:03:55.543 "iscsi_conn": {
00:03:55.543 "mask": "0x2",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "scsi": {
00:03:55.543 "mask": "0x4",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "bdev": {
00:03:55.543 "mask": "0x8",
00:03:55.543 "tpoint_mask": "0xffffffffffffffff"
00:03:55.543 },
00:03:55.543 "nvmf_rdma": {
00:03:55.543 "mask": "0x10",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "nvmf_tcp": {
00:03:55.543 "mask": "0x20",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "ftl": {
00:03:55.543 "mask": "0x40",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "blobfs": {
00:03:55.543 "mask": "0x80",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "dsa": {
00:03:55.543 "mask": "0x200",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "thread": {
00:03:55.543 "mask": "0x400",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "nvme_pcie": {
00:03:55.543 "mask": "0x800",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "iaa": {
00:03:55.543 "mask": "0x1000",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "nvme_tcp": {
00:03:55.543 "mask": "0x2000",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "bdev_nvme": {
00:03:55.543 "mask": "0x4000",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 },
00:03:55.543 "sock": {
00:03:55.543 "mask": "0x8000",
00:03:55.543 "tpoint_mask": "0x0"
00:03:55.543 }
00:03:55.543 }'
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:55.543 10:55:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:55.544 00:03:55.544 real 0m0.206s 00:03:55.544 user 0m0.175s 00:03:55.544 sys 0m0.022s 00:03:55.544 10:55:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:55.544 10:55:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.544 ************************************ 00:03:55.544 END TEST rpc_trace_cmd_test 00:03:55.544 ************************************ 00:03:55.803 10:55:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:55.803 10:55:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:55.803 10:55:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:55.803 10:55:52 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:55.803 10:55:52 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:55.803 10:55:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.803 ************************************ 00:03:55.803 START TEST rpc_daemon_integrity 00:03:55.803 ************************************ 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.803 10:55:52 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.803 { 00:03:55.803 "name": "Malloc2", 00:03:55.803 "aliases": [ 00:03:55.803 "d6c6bf6a-ece1-43fd-82f5-31b50ec5a4ea" 00:03:55.803 ], 00:03:55.803 "product_name": "Malloc disk", 00:03:55.803 "block_size": 512, 00:03:55.803 "num_blocks": 16384, 00:03:55.803 "uuid": "d6c6bf6a-ece1-43fd-82f5-31b50ec5a4ea", 00:03:55.803 "assigned_rate_limits": { 00:03:55.803 "rw_ios_per_sec": 0, 00:03:55.803 "rw_mbytes_per_sec": 0, 00:03:55.803 "r_mbytes_per_sec": 0, 00:03:55.803 "w_mbytes_per_sec": 0 00:03:55.803 }, 00:03:55.803 "claimed": false, 00:03:55.803 "zoned": false, 00:03:55.803 "supported_io_types": { 00:03:55.803 "read": true, 00:03:55.803 "write": true, 00:03:55.803 "unmap": true, 00:03:55.803 "write_zeroes": true, 00:03:55.803 "flush": true, 00:03:55.803 "reset": true, 00:03:55.803 "compare": false, 00:03:55.803 "compare_and_write": 
false, 00:03:55.803 "abort": true, 00:03:55.803 "nvme_admin": false, 00:03:55.803 "nvme_io": false 00:03:55.803 }, 00:03:55.803 "memory_domains": [ 00:03:55.803 { 00:03:55.803 "dma_device_id": "system", 00:03:55.803 "dma_device_type": 1 00:03:55.803 }, 00:03:55.803 { 00:03:55.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.803 "dma_device_type": 2 00:03:55.803 } 00:03:55.803 ], 00:03:55.803 "driver_specific": {} 00:03:55.803 } 00:03:55.803 ]' 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.803 [2024-05-15 10:55:52.976129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:55.803 [2024-05-15 10:55:52.976157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.803 [2024-05-15 10:55:52.976173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b19340 00:03:55.803 [2024-05-15 10:55:52.976179] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.803 [2024-05-15 10:55:52.977158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.803 [2024-05-15 10:55:52.977186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.803 Passthru0 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:55.803 10:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.803 10:55:52 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.803 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.803 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.803 { 00:03:55.803 "name": "Malloc2", 00:03:55.803 "aliases": [ 00:03:55.803 "d6c6bf6a-ece1-43fd-82f5-31b50ec5a4ea" 00:03:55.803 ], 00:03:55.803 "product_name": "Malloc disk", 00:03:55.803 "block_size": 512, 00:03:55.803 "num_blocks": 16384, 00:03:55.803 "uuid": "d6c6bf6a-ece1-43fd-82f5-31b50ec5a4ea", 00:03:55.803 "assigned_rate_limits": { 00:03:55.803 "rw_ios_per_sec": 0, 00:03:55.803 "rw_mbytes_per_sec": 0, 00:03:55.803 "r_mbytes_per_sec": 0, 00:03:55.803 "w_mbytes_per_sec": 0 00:03:55.803 }, 00:03:55.803 "claimed": true, 00:03:55.803 "claim_type": "exclusive_write", 00:03:55.803 "zoned": false, 00:03:55.803 "supported_io_types": { 00:03:55.803 "read": true, 00:03:55.803 "write": true, 00:03:55.803 "unmap": true, 00:03:55.803 "write_zeroes": true, 00:03:55.803 "flush": true, 00:03:55.803 "reset": true, 00:03:55.803 "compare": false, 00:03:55.803 "compare_and_write": false, 00:03:55.803 "abort": true, 00:03:55.803 "nvme_admin": false, 00:03:55.803 "nvme_io": false 00:03:55.803 }, 00:03:55.803 "memory_domains": [ 00:03:55.803 { 00:03:55.803 "dma_device_id": "system", 00:03:55.803 "dma_device_type": 1 00:03:55.803 }, 00:03:55.803 { 00:03:55.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.803 "dma_device_type": 2 00:03:55.803 } 00:03:55.803 ], 00:03:55.803 "driver_specific": {} 00:03:55.803 }, 00:03:55.803 { 00:03:55.803 "name": "Passthru0", 00:03:55.803 "aliases": [ 00:03:55.803 "9866da1b-bde7-538b-a356-e238ff5e727b" 00:03:55.803 ], 00:03:55.803 "product_name": "passthru", 00:03:55.803 "block_size": 512, 00:03:55.803 "num_blocks": 16384, 00:03:55.803 "uuid": "9866da1b-bde7-538b-a356-e238ff5e727b", 00:03:55.803 "assigned_rate_limits": { 00:03:55.803 "rw_ios_per_sec": 0, 00:03:55.803 "rw_mbytes_per_sec": 0, 
00:03:55.803 "r_mbytes_per_sec": 0, 00:03:55.803 "w_mbytes_per_sec": 0 00:03:55.803 }, 00:03:55.803 "claimed": false, 00:03:55.803 "zoned": false, 00:03:55.803 "supported_io_types": { 00:03:55.803 "read": true, 00:03:55.803 "write": true, 00:03:55.803 "unmap": true, 00:03:55.803 "write_zeroes": true, 00:03:55.803 "flush": true, 00:03:55.803 "reset": true, 00:03:55.803 "compare": false, 00:03:55.803 "compare_and_write": false, 00:03:55.803 "abort": true, 00:03:55.803 "nvme_admin": false, 00:03:55.803 "nvme_io": false 00:03:55.803 }, 00:03:55.803 "memory_domains": [ 00:03:55.803 { 00:03:55.803 "dma_device_id": "system", 00:03:55.803 "dma_device_type": 1 00:03:55.803 }, 00:03:55.803 { 00:03:55.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.803 "dma_device_type": 2 00:03:55.803 } 00:03:55.803 ], 00:03:55.803 "driver_specific": { 00:03:55.803 "passthru": { 00:03:55.803 "name": "Passthru0", 00:03:55.803 "base_bdev_name": "Malloc2" 00:03:55.803 } 00:03:55.803 } 00:03:55.803 } 00:03:55.803 ]' 00:03:55.803 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:55.803 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:55.804 10:55:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:55.804 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.063 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:56.063 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.063 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.063 10:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.063 00:03:56.063 real 0m0.263s 00:03:56.063 user 0m0.174s 00:03:56.063 sys 0m0.030s 00:03:56.063 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:56.063 10:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.063 ************************************ 00:03:56.063 END TEST rpc_daemon_integrity 00:03:56.063 ************************************ 00:03:56.063 10:55:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:56.063 10:55:53 rpc -- rpc/rpc.sh@84 -- # killprocess 2066575 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@947 -- # '[' -z 2066575 ']' 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@951 -- # kill -0 2066575 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@952 -- # uname 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2066575 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2066575' 00:03:56.063 killing process with pid 2066575 00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@966 -- # kill 2066575 
00:03:56.063 10:55:53 rpc -- common/autotest_common.sh@971 -- # wait 2066575 00:03:56.322 00:03:56.322 real 0m2.485s 00:03:56.322 user 0m3.200s 00:03:56.322 sys 0m0.661s 00:03:56.322 10:55:53 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:56.323 10:55:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.323 ************************************ 00:03:56.323 END TEST rpc 00:03:56.323 ************************************ 00:03:56.323 10:55:53 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.323 10:55:53 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:56.323 10:55:53 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:56.323 10:55:53 -- common/autotest_common.sh@10 -- # set +x 00:03:56.582 ************************************ 00:03:56.582 START TEST skip_rpc 00:03:56.582 ************************************ 00:03:56.582 10:55:53 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.582 * Looking for test storage... 
00:03:56.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.582 10:55:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.582 10:55:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.582 10:55:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:56.582 10:55:53 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:56.582 10:55:53 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:56.582 10:55:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.582 ************************************ 00:03:56.582 START TEST skip_rpc 00:03:56.582 ************************************ 00:03:56.582 10:55:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:03:56.582 10:55:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2067213 00:03:56.583 10:55:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.583 10:55:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:56.583 10:55:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:56.583 [2024-05-15 10:55:53.784518] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:03:56.583 [2024-05-15 10:55:53.784558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067213 ] 00:03:56.583 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.583 [2024-05-15 10:55:53.836330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.842 [2024-05-15 10:55:53.909582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:02.115 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:02.116 
10:55:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2067213 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 2067213 ']' 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 2067213 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2067213 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2067213' 00:04:02.116 killing process with pid 2067213 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 2067213 00:04:02.116 10:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 2067213 00:04:02.116 00:04:02.116 real 0m5.389s 00:04:02.116 user 0m5.163s 00:04:02.116 sys 0m0.252s 00:04:02.116 10:55:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:02.116 10:55:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.116 ************************************ 00:04:02.116 END TEST skip_rpc 00:04:02.116 ************************************ 00:04:02.116 10:55:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:02.116 10:55:59 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:02.116 10:55:59 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:02.116 10:55:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.116 
************************************ 00:04:02.116 START TEST skip_rpc_with_json 00:04:02.116 ************************************ 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2068159 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2068159 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 2068159 ']' 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:02.116 10:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.116 [2024-05-15 10:55:59.249438] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:02.116 [2024-05-15 10:55:59.249481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068159 ] 00:04:02.116 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.116 [2024-05-15 10:55:59.302982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.116 [2024-05-15 10:55:59.371516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.054 [2024-05-15 10:56:00.050022] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:03.054 request: 00:04:03.054 { 00:04:03.054 "trtype": "tcp", 00:04:03.054 "method": "nvmf_get_transports", 00:04:03.054 "req_id": 1 00:04:03.054 } 00:04:03.054 Got JSON-RPC error response 00:04:03.054 response: 00:04:03.054 { 00:04:03.054 "code": -19, 00:04:03.054 "message": "No such device" 00:04:03.054 } 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.054 [2024-05-15 10:56:00.058110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:03.054 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:03.054 { 00:04:03.054 "subsystems": [ 00:04:03.054 { 00:04:03.054 "subsystem": "vfio_user_target", 00:04:03.054 "config": null 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "subsystem": "keyring", 00:04:03.054 "config": [] 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "subsystem": "iobuf", 00:04:03.054 "config": [ 00:04:03.054 { 00:04:03.054 "method": "iobuf_set_options", 00:04:03.054 "params": { 00:04:03.054 "small_pool_count": 8192, 00:04:03.054 "large_pool_count": 1024, 00:04:03.054 "small_bufsize": 8192, 00:04:03.054 "large_bufsize": 135168 00:04:03.054 } 00:04:03.054 } 00:04:03.054 ] 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "subsystem": "sock", 00:04:03.054 "config": [ 00:04:03.054 { 00:04:03.054 "method": "sock_impl_set_options", 00:04:03.054 "params": { 00:04:03.054 "impl_name": "posix", 00:04:03.054 "recv_buf_size": 2097152, 00:04:03.054 "send_buf_size": 2097152, 00:04:03.054 "enable_recv_pipe": true, 00:04:03.054 "enable_quickack": false, 00:04:03.054 "enable_placement_id": 0, 00:04:03.054 "enable_zerocopy_send_server": true, 00:04:03.054 "enable_zerocopy_send_client": false, 00:04:03.054 "zerocopy_threshold": 0, 00:04:03.054 "tls_version": 0, 00:04:03.054 "enable_ktls": false 00:04:03.054 } 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "method": "sock_impl_set_options", 00:04:03.054 "params": { 
00:04:03.054 "impl_name": "ssl", 00:04:03.054 "recv_buf_size": 4096, 00:04:03.054 "send_buf_size": 4096, 00:04:03.054 "enable_recv_pipe": true, 00:04:03.054 "enable_quickack": false, 00:04:03.054 "enable_placement_id": 0, 00:04:03.054 "enable_zerocopy_send_server": true, 00:04:03.054 "enable_zerocopy_send_client": false, 00:04:03.054 "zerocopy_threshold": 0, 00:04:03.054 "tls_version": 0, 00:04:03.054 "enable_ktls": false 00:04:03.054 } 00:04:03.054 } 00:04:03.054 ] 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "subsystem": "vmd", 00:04:03.054 "config": [] 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "subsystem": "accel", 00:04:03.054 "config": [ 00:04:03.054 { 00:04:03.054 "method": "accel_set_options", 00:04:03.054 "params": { 00:04:03.054 "small_cache_size": 128, 00:04:03.054 "large_cache_size": 16, 00:04:03.054 "task_count": 2048, 00:04:03.054 "sequence_count": 2048, 00:04:03.054 "buf_count": 2048 00:04:03.054 } 00:04:03.054 } 00:04:03.054 ] 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "subsystem": "bdev", 00:04:03.054 "config": [ 00:04:03.054 { 00:04:03.054 "method": "bdev_set_options", 00:04:03.054 "params": { 00:04:03.054 "bdev_io_pool_size": 65535, 00:04:03.054 "bdev_io_cache_size": 256, 00:04:03.054 "bdev_auto_examine": true, 00:04:03.054 "iobuf_small_cache_size": 128, 00:04:03.054 "iobuf_large_cache_size": 16 00:04:03.054 } 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "method": "bdev_raid_set_options", 00:04:03.054 "params": { 00:04:03.054 "process_window_size_kb": 1024 00:04:03.054 } 00:04:03.054 }, 00:04:03.054 { 00:04:03.054 "method": "bdev_iscsi_set_options", 00:04:03.054 "params": { 00:04:03.055 "timeout_sec": 30 00:04:03.055 } 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "method": "bdev_nvme_set_options", 00:04:03.055 "params": { 00:04:03.055 "action_on_timeout": "none", 00:04:03.055 "timeout_us": 0, 00:04:03.055 "timeout_admin_us": 0, 00:04:03.055 "keep_alive_timeout_ms": 10000, 00:04:03.055 "arbitration_burst": 0, 00:04:03.055 "low_priority_weight": 0, 
00:04:03.055 "medium_priority_weight": 0, 00:04:03.055 "high_priority_weight": 0, 00:04:03.055 "nvme_adminq_poll_period_us": 10000, 00:04:03.055 "nvme_ioq_poll_period_us": 0, 00:04:03.055 "io_queue_requests": 0, 00:04:03.055 "delay_cmd_submit": true, 00:04:03.055 "transport_retry_count": 4, 00:04:03.055 "bdev_retry_count": 3, 00:04:03.055 "transport_ack_timeout": 0, 00:04:03.055 "ctrlr_loss_timeout_sec": 0, 00:04:03.055 "reconnect_delay_sec": 0, 00:04:03.055 "fast_io_fail_timeout_sec": 0, 00:04:03.055 "disable_auto_failback": false, 00:04:03.055 "generate_uuids": false, 00:04:03.055 "transport_tos": 0, 00:04:03.055 "nvme_error_stat": false, 00:04:03.055 "rdma_srq_size": 0, 00:04:03.055 "io_path_stat": false, 00:04:03.055 "allow_accel_sequence": false, 00:04:03.055 "rdma_max_cq_size": 0, 00:04:03.055 "rdma_cm_event_timeout_ms": 0, 00:04:03.055 "dhchap_digests": [ 00:04:03.055 "sha256", 00:04:03.055 "sha384", 00:04:03.055 "sha512" 00:04:03.055 ], 00:04:03.055 "dhchap_dhgroups": [ 00:04:03.055 "null", 00:04:03.055 "ffdhe2048", 00:04:03.055 "ffdhe3072", 00:04:03.055 "ffdhe4096", 00:04:03.055 "ffdhe6144", 00:04:03.055 "ffdhe8192" 00:04:03.055 ] 00:04:03.055 } 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "method": "bdev_nvme_set_hotplug", 00:04:03.055 "params": { 00:04:03.055 "period_us": 100000, 00:04:03.055 "enable": false 00:04:03.055 } 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "method": "bdev_wait_for_examine" 00:04:03.055 } 00:04:03.055 ] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "scsi", 00:04:03.055 "config": null 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "scheduler", 00:04:03.055 "config": [ 00:04:03.055 { 00:04:03.055 "method": "framework_set_scheduler", 00:04:03.055 "params": { 00:04:03.055 "name": "static" 00:04:03.055 } 00:04:03.055 } 00:04:03.055 ] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "vhost_scsi", 00:04:03.055 "config": [] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "vhost_blk", 00:04:03.055 
"config": [] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "ublk", 00:04:03.055 "config": [] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "nbd", 00:04:03.055 "config": [] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "nvmf", 00:04:03.055 "config": [ 00:04:03.055 { 00:04:03.055 "method": "nvmf_set_config", 00:04:03.055 "params": { 00:04:03.055 "discovery_filter": "match_any", 00:04:03.055 "admin_cmd_passthru": { 00:04:03.055 "identify_ctrlr": false 00:04:03.055 } 00:04:03.055 } 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "method": "nvmf_set_max_subsystems", 00:04:03.055 "params": { 00:04:03.055 "max_subsystems": 1024 00:04:03.055 } 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "method": "nvmf_set_crdt", 00:04:03.055 "params": { 00:04:03.055 "crdt1": 0, 00:04:03.055 "crdt2": 0, 00:04:03.055 "crdt3": 0 00:04:03.055 } 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "method": "nvmf_create_transport", 00:04:03.055 "params": { 00:04:03.055 "trtype": "TCP", 00:04:03.055 "max_queue_depth": 128, 00:04:03.055 "max_io_qpairs_per_ctrlr": 127, 00:04:03.055 "in_capsule_data_size": 4096, 00:04:03.055 "max_io_size": 131072, 00:04:03.055 "io_unit_size": 131072, 00:04:03.055 "max_aq_depth": 128, 00:04:03.055 "num_shared_buffers": 511, 00:04:03.055 "buf_cache_size": 4294967295, 00:04:03.055 "dif_insert_or_strip": false, 00:04:03.055 "zcopy": false, 00:04:03.055 "c2h_success": true, 00:04:03.055 "sock_priority": 0, 00:04:03.055 "abort_timeout_sec": 1, 00:04:03.055 "ack_timeout": 0, 00:04:03.055 "data_wr_pool_size": 0 00:04:03.055 } 00:04:03.055 } 00:04:03.055 ] 00:04:03.055 }, 00:04:03.055 { 00:04:03.055 "subsystem": "iscsi", 00:04:03.055 "config": [ 00:04:03.055 { 00:04:03.055 "method": "iscsi_set_options", 00:04:03.055 "params": { 00:04:03.055 "node_base": "iqn.2016-06.io.spdk", 00:04:03.055 "max_sessions": 128, 00:04:03.055 "max_connections_per_session": 2, 00:04:03.055 "max_queue_depth": 64, 00:04:03.055 "default_time2wait": 2, 00:04:03.055 
"default_time2retain": 20, 00:04:03.055 "first_burst_length": 8192, 00:04:03.055 "immediate_data": true, 00:04:03.055 "allow_duplicated_isid": false, 00:04:03.055 "error_recovery_level": 0, 00:04:03.055 "nop_timeout": 60, 00:04:03.055 "nop_in_interval": 30, 00:04:03.055 "disable_chap": false, 00:04:03.055 "require_chap": false, 00:04:03.055 "mutual_chap": false, 00:04:03.055 "chap_group": 0, 00:04:03.055 "max_large_datain_per_connection": 64, 00:04:03.055 "max_r2t_per_connection": 4, 00:04:03.055 "pdu_pool_size": 36864, 00:04:03.055 "immediate_data_pool_size": 16384, 00:04:03.055 "data_out_pool_size": 2048 00:04:03.055 } 00:04:03.055 } 00:04:03.055 ] 00:04:03.055 } 00:04:03.055 ] 00:04:03.055 } 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2068159 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2068159 ']' 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2068159 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2068159 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2068159' 00:04:03.055 killing process with pid 2068159 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2068159 00:04:03.055 10:56:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@971 -- # wait 2068159 00:04:03.624 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2068398 00:04:03.624 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:03.624 10:56:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2068398 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2068398 ']' 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2068398 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2068398 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2068398' 00:04:08.896 killing process with pid 2068398 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2068398 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2068398 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.896 00:04:08.896 real 0m6.775s 00:04:08.896 user 0m6.602s 00:04:08.896 sys 0m0.563s 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:08.896 10:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.896 ************************************ 00:04:08.896 END TEST skip_rpc_with_json 00:04:08.896 ************************************ 00:04:08.896 10:56:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:08.896 10:56:06 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:08.896 10:56:06 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:08.896 10:56:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.896 ************************************ 00:04:08.896 START TEST skip_rpc_with_delay 00:04:08.896 ************************************ 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.896 [2024-05-15 10:56:06.098055] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
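The `skip_rpc_with_delay` trace above runs `spdk_tgt` through a `NOT`-style wrapper: the command is expected to fail (here, because `--wait-for-rpc` is rejected when no RPC server is started), its exit status is captured in `es`, and statuses above 128 (death by signal) are special-cased before the wrapper inverts the result. A minimal self-contained sketch of that pattern follows; the function name `expect_failure` is mine, not SPDK's, and this is an illustration of the idiom rather than the actual `autotest_common.sh` helper.

```shell
# Sketch of the expected-failure wrapper seen in the trace: run the given
# command, capture its exit status, fold signal deaths (>128) down to a
# plain failure, and succeed only when the wrapped command failed.
expect_failure() {
    local es=0
    "$@" || es=$?
    # exit statuses above 128 conventionally mean "killed by signal N-128";
    # the trace's helper also branches on (( es > 128 ))
    if (( es > 128 )); then
        es=1
    fi
    (( es != 0 ))
}

expect_failure false && echo "failure detected, as the test expects"
```

In the log this is what lets the test assert that a bad flag combination makes `spdk_tgt` exit non-zero without aborting the surrounding test run.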
00:04:08.896 [2024-05-15 10:56:06.098112] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:08.896 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:08.896 00:04:08.896 real 0m0.063s 00:04:08.896 user 0m0.040s 00:04:08.897 sys 0m0.022s 00:04:08.897 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:08.897 10:56:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:08.897 ************************************ 00:04:08.897 END TEST skip_rpc_with_delay 00:04:08.897 ************************************ 00:04:08.897 10:56:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:08.897 10:56:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:08.897 10:56:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:08.897 10:56:06 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:08.897 10:56:06 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:08.897 10:56:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.157 ************************************ 00:04:09.157 START TEST exit_on_failed_rpc_init 00:04:09.157 ************************************ 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2069375 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2069375 00:04:09.157 10:56:06 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 2069375 ']' 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:09.157 10:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:09.157 [2024-05-15 10:56:06.238804] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:09.157 [2024-05-15 10:56:06.238846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069375 ] 00:04:09.157 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.157 [2024-05-15 10:56:06.293113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.157 [2024-05-15 10:56:06.365456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.095 [2024-05-15 10:56:07.078485] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:10.095 [2024-05-15 10:56:07.078528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069607 ] 00:04:10.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.095 [2024-05-15 10:56:07.130038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.095 [2024-05-15 10:56:07.203635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.095 [2024-05-15 10:56:07.203703] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
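The `exit_on_failed_rpc_init` failure above (`RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.`) comes from a second `spdk_tgt` trying to bind the same Unix domain socket path as the first. The collision can be reproduced without SPDK at all, since binding a Unix socket to an existing path fails with "Address already in use"; the sketch below uses `python3` only as a tiny socket helper, and the paths and function name are illustrative.

```shell
# Bind a Unix domain socket at the given path; fails if the path is taken.
# This mirrors what _spdk_rpc_listen hits in the trace above.
bind_unix() {
    python3 -c 'import socket, sys; socket.socket(socket.AF_UNIX).bind(sys.argv[1])' "$1" 2>/dev/null
}

sock=$(mktemp -u /tmp/rpc_demo.XXXXXX)
bind_unix "$sock" && echo "first bind ok"
# The socket file persists on disk, so a second bind on the same path is
# refused, just like the second spdk_tgt instance in the log.
bind_unix "$sock" || echo "second bind refused: path in use"
rm -f "$sock"
```

This is why the test launches the second target with `NOT`: the bind failure is the expected outcome, and the app is expected to stop via `spdk_app_stop` with a non-zero status.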
00:04:10.095 [2024-05-15 10:56:07.203711] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:10.095 [2024-05-15 10:56:07.203717] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2069375 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 2069375 ']' 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 2069375 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2069375 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2069375' 
00:04:10.095 killing process with pid 2069375 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 2069375 00:04:10.095 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 2069375 00:04:10.666 00:04:10.666 real 0m1.490s 00:04:10.666 user 0m1.748s 00:04:10.666 sys 0m0.368s 00:04:10.666 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:10.666 10:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 ************************************ 00:04:10.666 END TEST exit_on_failed_rpc_init 00:04:10.666 ************************************ 00:04:10.666 10:56:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.666 00:04:10.666 real 0m14.107s 00:04:10.666 user 0m13.717s 00:04:10.666 sys 0m1.448s 00:04:10.666 10:56:07 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:10.666 10:56:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 ************************************ 00:04:10.666 END TEST skip_rpc 00:04:10.666 ************************************ 00:04:10.666 10:56:07 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.666 10:56:07 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:10.666 10:56:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:10.666 10:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 ************************************ 00:04:10.666 START TEST rpc_client 00:04:10.666 ************************************ 00:04:10.666 10:56:07 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.666 * Looking for test storage... 
00:04:10.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:10.666 10:56:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:10.666 OK 00:04:10.666 10:56:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:10.666 00:04:10.666 real 0m0.099s 00:04:10.666 user 0m0.043s 00:04:10.666 sys 0m0.064s 00:04:10.666 10:56:07 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:10.666 10:56:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 ************************************ 00:04:10.666 END TEST rpc_client 00:04:10.666 ************************************ 00:04:10.666 10:56:07 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:10.666 10:56:07 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:10.666 10:56:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:10.666 10:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:10.926 ************************************ 00:04:10.926 START TEST json_config 00:04:10.926 ************************************ 00:04:10.926 10:56:07 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.926 10:56:08 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.926 10:56:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.926 10:56:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.926 10:56:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.926 10:56:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:10.926 10:56:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.926 10:56:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.926 10:56:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:10.926 10:56:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@47 -- # : 0 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.926 10:56:08 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:10.926 10:56:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:10.926 10:56:08 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:10.926 INFO: JSON configuration test init 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:10.926 10:56:08 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:10.926 10:56:08 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:10.926 10:56:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.927 10:56:08 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.927 10:56:08 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:10.927 10:56:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:10.927 10:56:08 json_config -- json_config/common.sh@10 -- # shift 00:04:10.927 10:56:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.927 10:56:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.927 10:56:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.927 10:56:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.927 10:56:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.927 10:56:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2069815 00:04:10.927 10:56:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.927 Waiting for target to run... 
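The `waitforlisten 2069815 /var/tmp/spdk_tgt.sock` call above polls, with `max_retries=100`, until the freshly launched target is reachable on its RPC socket. A stripped-down sketch of that idiom is below; the real helper also probes the RPC server via `rpc.py` and checks the PID is still alive, whereas this version (name `waitforlisten_demo` is mine) only waits for the socket file to appear.

```shell
# Hedged sketch of the waitforlisten idiom from the trace: poll with a
# retry cap until a Unix socket exists at the given path.
waitforlisten_demo() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        # -S is true only for socket files, not ordinary files
        [[ -S $sock ]] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

Polling with a bounded retry count is what lets the test fail fast with "Waiting for process to start up..." diagnostics instead of hanging forever when the target crashes during startup.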
00:04:10.927 10:56:08 json_config -- json_config/common.sh@25 -- # waitforlisten 2069815 /var/tmp/spdk_tgt.sock 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@828 -- # '[' -z 2069815 ']' 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:10.927 10:56:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:10.927 10:56:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.927 [2024-05-15 10:56:08.098862] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:10.927 [2024-05-15 10:56:08.098914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069815 ] 00:04:10.927 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.186 [2024-05-15 10:56:08.359658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.186 [2024-05-15 10:56:08.426913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.754 10:56:08 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:11.754 10:56:08 json_config -- common/autotest_common.sh@861 -- # return 0 00:04:11.754 10:56:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:11.754 00:04:11.754 10:56:08 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:11.754 10:56:08 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:11.754 10:56:08 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:11.754 10:56:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.754 10:56:08 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:11.754 10:56:08 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:11.754 10:56:08 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:11.754 10:56:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.754 10:56:08 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:11.754 10:56:08 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:11.754 10:56:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.037 10:56:12 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:15.037 10:56:12 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:15.037 10:56:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:15.037 10:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:15.037 10:56:12 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:15.037 10:56:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:15.037 10:56:12 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:15.037 10:56:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:15.037 10:56:12 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.037 10:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.294 MallocForNvmf0 00:04:15.294 10:56:12 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.294 10:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.552 MallocForNvmf1 00:04:15.552 10:56:12 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.552 10:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.552 [2024-05-15 10:56:12.731077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.552 10:56:12 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.552 10:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.811 10:56:12 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.811 10:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.072 10:56:13 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.072 10:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.072 10:56:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.072 10:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.331 [2024-05-15 10:56:13.417057] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:16.331 [2024-05-15 10:56:13.417406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:16.331 10:56:13 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:16.331 10:56:13 json_config -- common/autotest_common.sh@727 -- # 
xtrace_disable 00:04:16.331 10:56:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.331 10:56:13 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:16.331 10:56:13 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:16.331 10:56:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.331 10:56:13 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:16.331 10:56:13 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.331 10:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.607 MallocBdevForConfigChangeCheck 00:04:16.607 10:56:13 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:16.607 10:56:13 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:16.607 10:56:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.607 10:56:13 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:16.607 10:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.875 10:56:13 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:16.875 INFO: shutting down applications... 
00:04:16.875 10:56:13 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:16.875 10:56:13 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:16.875 10:56:13 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:16.875 10:56:13 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.774 Calling clear_iscsi_subsystem 00:04:18.774 Calling clear_nvmf_subsystem 00:04:18.774 Calling clear_nbd_subsystem 00:04:18.774 Calling clear_ublk_subsystem 00:04:18.774 Calling clear_vhost_blk_subsystem 00:04:18.775 Calling clear_vhost_scsi_subsystem 00:04:18.775 Calling clear_bdev_subsystem 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@345 -- # break 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:18.775 10:56:15 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:18.775 10:56:15 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:18.775 10:56:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:18.775 10:56:15 json_config -- json_config/common.sh@35 -- # [[ -n 2069815 ]] 00:04:18.775 10:56:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2069815 00:04:18.775 [2024-05-15 10:56:15.920448] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:18.775 10:56:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:18.775 10:56:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:18.775 10:56:15 json_config -- json_config/common.sh@41 -- # kill -0 2069815 00:04:18.775 10:56:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.341 10:56:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.341 10:56:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.341 10:56:16 json_config -- json_config/common.sh@41 -- # kill -0 2069815 00:04:19.341 10:56:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:19.341 10:56:16 json_config -- json_config/common.sh@43 -- # break 00:04:19.341 10:56:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:19.341 10:56:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:19.341 SPDK target shutdown done 00:04:19.341 10:56:16 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:19.341 INFO: relaunching applications... 
00:04:19.341 10:56:16 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.341 10:56:16 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.341 10:56:16 json_config -- json_config/common.sh@10 -- # shift 00:04:19.341 10:56:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.341 10:56:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.341 10:56:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.341 10:56:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.341 10:56:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.341 10:56:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2071458 00:04:19.341 10:56:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.341 Waiting for target to run... 00:04:19.341 10:56:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.341 10:56:16 json_config -- json_config/common.sh@25 -- # waitforlisten 2071458 /var/tmp/spdk_tgt.sock 00:04:19.341 10:56:16 json_config -- common/autotest_common.sh@828 -- # '[' -z 2071458 ']' 00:04:19.341 10:56:16 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.341 10:56:16 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:19.341 10:56:16 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:19.341 10:56:16 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:19.341 10:56:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.341 [2024-05-15 10:56:16.480826] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:19.341 [2024-05-15 10:56:16.480887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071458 ] 00:04:19.341 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.908 [2024-05-15 10:56:16.923590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.908 [2024-05-15 10:56:17.014198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.193 [2024-05-15 10:56:20.012931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.193 [2024-05-15 10:56:20.044928] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:23.193 [2024-05-15 10:56:20.045279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.452 10:56:20 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:23.452 10:56:20 json_config -- common/autotest_common.sh@861 -- # return 0 00:04:23.452 10:56:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:23.452 00:04:23.452 10:56:20 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:23.452 10:56:20 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:23.452 INFO: Checking if target configuration is the same... 
00:04:23.452 10:56:20 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.452 10:56:20 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:23.452 10:56:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.452 + '[' 2 -ne 2 ']' 00:04:23.452 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.452 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:23.452 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.452 +++ basename /dev/fd/62 00:04:23.452 ++ mktemp /tmp/62.XXX 00:04:23.452 + tmp_file_1=/tmp/62.NlV 00:04:23.452 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.452 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.452 + tmp_file_2=/tmp/spdk_tgt_config.json.aBp 00:04:23.452 + ret=0 00:04:23.452 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.710 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.968 + diff -u /tmp/62.NlV /tmp/spdk_tgt_config.json.aBp 00:04:23.968 + echo 'INFO: JSON config files are the same' 00:04:23.968 INFO: JSON config files are the same 00:04:23.969 + rm /tmp/62.NlV /tmp/spdk_tgt_config.json.aBp 00:04:23.969 + exit 0 00:04:23.969 10:56:20 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:23.969 10:56:20 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:23.969 INFO: changing configuration and checking if this can be detected... 
00:04:23.969 10:56:20 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.969 10:56:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.969 10:56:21 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.969 10:56:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:23.969 10:56:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.969 + '[' 2 -ne 2 ']' 00:04:23.969 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.969 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:23.969 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.969 +++ basename /dev/fd/62 00:04:23.969 ++ mktemp /tmp/62.XXX 00:04:23.969 + tmp_file_1=/tmp/62.lPn 00:04:23.969 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.969 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.969 + tmp_file_2=/tmp/spdk_tgt_config.json.pE5 00:04:23.969 + ret=0 00:04:23.969 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.227 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.486 + diff -u /tmp/62.lPn /tmp/spdk_tgt_config.json.pE5 00:04:24.486 + ret=1 00:04:24.486 + echo '=== Start of file: /tmp/62.lPn ===' 00:04:24.486 + cat /tmp/62.lPn 00:04:24.486 + echo '=== End of file: /tmp/62.lPn ===' 00:04:24.486 + echo '' 00:04:24.486 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pE5 ===' 00:04:24.486 + cat /tmp/spdk_tgt_config.json.pE5 00:04:24.486 + echo '=== End of file: /tmp/spdk_tgt_config.json.pE5 ===' 00:04:24.486 + echo '' 00:04:24.486 + rm /tmp/62.lPn /tmp/spdk_tgt_config.json.pE5 00:04:24.486 + exit 1 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:24.486 INFO: configuration change detected. 
00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@317 -- # [[ -n 2071458 ]] 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.486 10:56:21 json_config -- json_config/json_config.sh@323 -- # killprocess 2071458 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@947 -- # '[' -z 2071458 ']' 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@951 -- # kill -0 
2071458 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@952 -- # uname 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2071458 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2071458' 00:04:24.486 killing process with pid 2071458 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@966 -- # kill 2071458 00:04:24.486 [2024-05-15 10:56:21.609217] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:24.486 10:56:21 json_config -- common/autotest_common.sh@971 -- # wait 2071458 00:04:25.861 10:56:23 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.861 10:56:23 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:25.861 10:56:23 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:25.861 10:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.120 10:56:23 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:26.120 10:56:23 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:26.120 INFO: Success 00:04:26.120 00:04:26.120 real 0m15.202s 00:04:26.120 user 0m16.001s 00:04:26.120 sys 0m1.861s 00:04:26.120 10:56:23 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:26.120 10:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 
00:04:26.120 ************************************ 00:04:26.120 END TEST json_config 00:04:26.120 ************************************ 00:04:26.120 10:56:23 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.120 10:56:23 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:26.120 10:56:23 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:26.120 10:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.120 ************************************ 00:04:26.120 START TEST json_config_extra_key 00:04:26.120 ************************************ 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.120 10:56:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.120 10:56:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.120 10:56:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.120 10:56:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.120 10:56:23 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.120 10:56:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.120 10:56:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:26.120 10:56:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:26.120 10:56:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:26.120 INFO: launching applications... 
00:04:26.120 10:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2072732 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.120 Waiting for target to run... 
00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2072732 /var/tmp/spdk_tgt.sock 00:04:26.120 10:56:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 2072732 ']' 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:26.120 10:56:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.120 [2024-05-15 10:56:23.354140] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:26.120 [2024-05-15 10:56:23.354197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072732 ] 00:04:26.120 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.378 [2024-05-15 10:56:23.614551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.636 [2024-05-15 10:56:23.683709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.894 10:56:24 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:26.894 10:56:24 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:26.894 00:04:26.894 10:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:26.894 INFO: shutting down applications... 
00:04:26.894 10:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2072732 ]] 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2072732 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2072732 00:04:26.894 10:56:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2072732 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.462 10:56:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.462 SPDK target shutdown done 00:04:27.462 10:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:27.462 Success 00:04:27.462 00:04:27.462 real 0m1.436s 00:04:27.463 user 0m1.267s 00:04:27.463 sys 0m0.350s 00:04:27.463 10:56:24 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:27.463 10:56:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.463 
************************************ 00:04:27.463 END TEST json_config_extra_key 00:04:27.463 ************************************ 00:04:27.463 10:56:24 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.463 10:56:24 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:27.463 10:56:24 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:27.463 10:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:27.463 ************************************ 00:04:27.463 START TEST alias_rpc 00:04:27.463 ************************************ 00:04:27.463 10:56:24 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.721 * Looking for test storage... 00:04:27.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:27.721 10:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.721 10:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2073012 00:04:27.721 10:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.721 10:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2073012 00:04:27.721 10:56:24 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 2073012 ']' 00:04:27.721 10:56:24 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.721 10:56:24 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:27.721 10:56:24 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
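The `waitforlisten` helper traced above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the freshly launched `spdk_tgt` is reachable on its RPC socket. A simplified, hedged sketch of that polling pattern follows: it only checks that the socket path exists, whereas the real helper also verifies the RPC server answers, and both the name `waitforsocket` and its retry parameter are illustrative, not the helper's real interface.

```shell
# Simplified sketch of the waitforlisten pattern: poll until a UNIX-domain
# socket appears at the given path, with a bounded number of retries
# (the trace uses max_retries=100).
waitforsocket() {
    sock=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -S "$sock" ]; then
            echo "socket $sock is up"
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    echo "timed out waiting for $sock"
    return 1
}

# Demo: with no server running, the wait times out quickly.
waitforsocket /tmp/no-such-socket.sock 2 || true
# → timed out waiting for /tmp/no-such-socket.sock
```

Bounding the retries matters here: the suite wraps each test in a timing budget (`real 0m1.525s` above), so an unresponsive target must fail fast rather than hang the run.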
00:04:27.721 10:56:24 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:27.721 10:56:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.721 [2024-05-15 10:56:24.864922] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:27.721 [2024-05-15 10:56:24.864975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073012 ] 00:04:27.721 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.721 [2024-05-15 10:56:24.919877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.979 [2024-05-15 10:56:24.997053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.546 10:56:25 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:28.546 10:56:25 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:28.546 10:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:28.805 10:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2073012 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 2073012 ']' 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 2073012 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2073012 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2073012' 00:04:28.805 killing process with 
pid 2073012 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@966 -- # kill 2073012 00:04:28.805 10:56:25 alias_rpc -- common/autotest_common.sh@971 -- # wait 2073012 00:04:29.064 00:04:29.064 real 0m1.525s 00:04:29.064 user 0m1.671s 00:04:29.064 sys 0m0.400s 00:04:29.064 10:56:26 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:29.064 10:56:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.064 ************************************ 00:04:29.064 END TEST alias_rpc 00:04:29.064 ************************************ 00:04:29.064 10:56:26 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:29.064 10:56:26 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.064 10:56:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:29.064 10:56:26 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:29.064 10:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.064 ************************************ 00:04:29.064 START TEST spdkcli_tcp 00:04:29.064 ************************************ 00:04:29.064 10:56:26 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.324 * Looking for test storage... 
00:04:29.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2073305 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2073305 00:04:29.324 10:56:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 2073305 ']' 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:29.324 10:56:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.324 [2024-05-15 10:56:26.458930] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:29.324 [2024-05-15 10:56:26.458974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073305 ] 00:04:29.324 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.324 [2024-05-15 10:56:26.511180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.324 [2024-05-15 10:56:26.585010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.324 [2024-05-15 10:56:26.585013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.261 10:56:27 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:30.261 10:56:27 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:04:30.261 10:56:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2073454 00:04:30.262 10:56:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.262 10:56:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:30.262 [ 00:04:30.262 "bdev_malloc_delete", 00:04:30.262 "bdev_malloc_create", 00:04:30.262 "bdev_null_resize", 00:04:30.262 "bdev_null_delete", 00:04:30.262 "bdev_null_create", 00:04:30.262 "bdev_nvme_cuse_unregister", 00:04:30.262 "bdev_nvme_cuse_register", 00:04:30.262 "bdev_opal_new_user", 00:04:30.262 "bdev_opal_set_lock_state", 00:04:30.262 "bdev_opal_delete", 00:04:30.262 "bdev_opal_get_info", 00:04:30.262 "bdev_opal_create", 00:04:30.262 "bdev_nvme_opal_revert", 00:04:30.262 "bdev_nvme_opal_init", 00:04:30.262 
"bdev_nvme_send_cmd", 00:04:30.262 "bdev_nvme_get_path_iostat", 00:04:30.262 "bdev_nvme_get_mdns_discovery_info", 00:04:30.262 "bdev_nvme_stop_mdns_discovery", 00:04:30.262 "bdev_nvme_start_mdns_discovery", 00:04:30.262 "bdev_nvme_set_multipath_policy", 00:04:30.262 "bdev_nvme_set_preferred_path", 00:04:30.262 "bdev_nvme_get_io_paths", 00:04:30.262 "bdev_nvme_remove_error_injection", 00:04:30.262 "bdev_nvme_add_error_injection", 00:04:30.262 "bdev_nvme_get_discovery_info", 00:04:30.262 "bdev_nvme_stop_discovery", 00:04:30.262 "bdev_nvme_start_discovery", 00:04:30.262 "bdev_nvme_get_controller_health_info", 00:04:30.262 "bdev_nvme_disable_controller", 00:04:30.262 "bdev_nvme_enable_controller", 00:04:30.262 "bdev_nvme_reset_controller", 00:04:30.262 "bdev_nvme_get_transport_statistics", 00:04:30.262 "bdev_nvme_apply_firmware", 00:04:30.262 "bdev_nvme_detach_controller", 00:04:30.262 "bdev_nvme_get_controllers", 00:04:30.262 "bdev_nvme_attach_controller", 00:04:30.262 "bdev_nvme_set_hotplug", 00:04:30.262 "bdev_nvme_set_options", 00:04:30.262 "bdev_passthru_delete", 00:04:30.262 "bdev_passthru_create", 00:04:30.262 "bdev_lvol_check_shallow_copy", 00:04:30.262 "bdev_lvol_start_shallow_copy", 00:04:30.262 "bdev_lvol_grow_lvstore", 00:04:30.262 "bdev_lvol_get_lvols", 00:04:30.262 "bdev_lvol_get_lvstores", 00:04:30.262 "bdev_lvol_delete", 00:04:30.262 "bdev_lvol_set_read_only", 00:04:30.262 "bdev_lvol_resize", 00:04:30.262 "bdev_lvol_decouple_parent", 00:04:30.262 "bdev_lvol_inflate", 00:04:30.262 "bdev_lvol_rename", 00:04:30.262 "bdev_lvol_clone_bdev", 00:04:30.262 "bdev_lvol_clone", 00:04:30.262 "bdev_lvol_snapshot", 00:04:30.262 "bdev_lvol_create", 00:04:30.262 "bdev_lvol_delete_lvstore", 00:04:30.262 "bdev_lvol_rename_lvstore", 00:04:30.262 "bdev_lvol_create_lvstore", 00:04:30.262 "bdev_raid_set_options", 00:04:30.262 "bdev_raid_remove_base_bdev", 00:04:30.262 "bdev_raid_add_base_bdev", 00:04:30.262 "bdev_raid_delete", 00:04:30.262 "bdev_raid_create", 00:04:30.262 
"bdev_raid_get_bdevs", 00:04:30.262 "bdev_error_inject_error", 00:04:30.262 "bdev_error_delete", 00:04:30.262 "bdev_error_create", 00:04:30.262 "bdev_split_delete", 00:04:30.262 "bdev_split_create", 00:04:30.262 "bdev_delay_delete", 00:04:30.262 "bdev_delay_create", 00:04:30.262 "bdev_delay_update_latency", 00:04:30.262 "bdev_zone_block_delete", 00:04:30.262 "bdev_zone_block_create", 00:04:30.262 "blobfs_create", 00:04:30.262 "blobfs_detect", 00:04:30.262 "blobfs_set_cache_size", 00:04:30.262 "bdev_aio_delete", 00:04:30.262 "bdev_aio_rescan", 00:04:30.262 "bdev_aio_create", 00:04:30.262 "bdev_ftl_set_property", 00:04:30.262 "bdev_ftl_get_properties", 00:04:30.262 "bdev_ftl_get_stats", 00:04:30.262 "bdev_ftl_unmap", 00:04:30.262 "bdev_ftl_unload", 00:04:30.262 "bdev_ftl_delete", 00:04:30.262 "bdev_ftl_load", 00:04:30.262 "bdev_ftl_create", 00:04:30.262 "bdev_virtio_attach_controller", 00:04:30.262 "bdev_virtio_scsi_get_devices", 00:04:30.262 "bdev_virtio_detach_controller", 00:04:30.262 "bdev_virtio_blk_set_hotplug", 00:04:30.262 "bdev_iscsi_delete", 00:04:30.262 "bdev_iscsi_create", 00:04:30.262 "bdev_iscsi_set_options", 00:04:30.262 "accel_error_inject_error", 00:04:30.262 "ioat_scan_accel_module", 00:04:30.262 "dsa_scan_accel_module", 00:04:30.262 "iaa_scan_accel_module", 00:04:30.262 "vfu_virtio_create_scsi_endpoint", 00:04:30.262 "vfu_virtio_scsi_remove_target", 00:04:30.262 "vfu_virtio_scsi_add_target", 00:04:30.262 "vfu_virtio_create_blk_endpoint", 00:04:30.262 "vfu_virtio_delete_endpoint", 00:04:30.262 "keyring_file_remove_key", 00:04:30.262 "keyring_file_add_key", 00:04:30.262 "iscsi_get_histogram", 00:04:30.262 "iscsi_enable_histogram", 00:04:30.262 "iscsi_set_options", 00:04:30.262 "iscsi_get_auth_groups", 00:04:30.262 "iscsi_auth_group_remove_secret", 00:04:30.262 "iscsi_auth_group_add_secret", 00:04:30.262 "iscsi_delete_auth_group", 00:04:30.262 "iscsi_create_auth_group", 00:04:30.262 "iscsi_set_discovery_auth", 00:04:30.262 "iscsi_get_options", 
00:04:30.262 "iscsi_target_node_request_logout", 00:04:30.262 "iscsi_target_node_set_redirect", 00:04:30.262 "iscsi_target_node_set_auth", 00:04:30.262 "iscsi_target_node_add_lun", 00:04:30.262 "iscsi_get_stats", 00:04:30.262 "iscsi_get_connections", 00:04:30.262 "iscsi_portal_group_set_auth", 00:04:30.262 "iscsi_start_portal_group", 00:04:30.262 "iscsi_delete_portal_group", 00:04:30.262 "iscsi_create_portal_group", 00:04:30.262 "iscsi_get_portal_groups", 00:04:30.262 "iscsi_delete_target_node", 00:04:30.262 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.262 "iscsi_target_node_add_pg_ig_maps", 00:04:30.262 "iscsi_create_target_node", 00:04:30.262 "iscsi_get_target_nodes", 00:04:30.262 "iscsi_delete_initiator_group", 00:04:30.262 "iscsi_initiator_group_remove_initiators", 00:04:30.262 "iscsi_initiator_group_add_initiators", 00:04:30.262 "iscsi_create_initiator_group", 00:04:30.262 "iscsi_get_initiator_groups", 00:04:30.262 "nvmf_set_crdt", 00:04:30.262 "nvmf_set_config", 00:04:30.262 "nvmf_set_max_subsystems", 00:04:30.262 "nvmf_stop_mdns_prr", 00:04:30.262 "nvmf_publish_mdns_prr", 00:04:30.262 "nvmf_subsystem_get_listeners", 00:04:30.262 "nvmf_subsystem_get_qpairs", 00:04:30.262 "nvmf_subsystem_get_controllers", 00:04:30.262 "nvmf_get_stats", 00:04:30.262 "nvmf_get_transports", 00:04:30.262 "nvmf_create_transport", 00:04:30.262 "nvmf_get_targets", 00:04:30.262 "nvmf_delete_target", 00:04:30.262 "nvmf_create_target", 00:04:30.262 "nvmf_subsystem_allow_any_host", 00:04:30.262 "nvmf_subsystem_remove_host", 00:04:30.262 "nvmf_subsystem_add_host", 00:04:30.262 "nvmf_ns_remove_host", 00:04:30.262 "nvmf_ns_add_host", 00:04:30.262 "nvmf_subsystem_remove_ns", 00:04:30.262 "nvmf_subsystem_add_ns", 00:04:30.262 "nvmf_subsystem_listener_set_ana_state", 00:04:30.262 "nvmf_discovery_get_referrals", 00:04:30.262 "nvmf_discovery_remove_referral", 00:04:30.262 "nvmf_discovery_add_referral", 00:04:30.262 "nvmf_subsystem_remove_listener", 00:04:30.262 "nvmf_subsystem_add_listener", 
00:04:30.262 "nvmf_delete_subsystem", 00:04:30.262 "nvmf_create_subsystem", 00:04:30.262 "nvmf_get_subsystems", 00:04:30.262 "env_dpdk_get_mem_stats", 00:04:30.262 "nbd_get_disks", 00:04:30.262 "nbd_stop_disk", 00:04:30.262 "nbd_start_disk", 00:04:30.262 "ublk_recover_disk", 00:04:30.262 "ublk_get_disks", 00:04:30.262 "ublk_stop_disk", 00:04:30.262 "ublk_start_disk", 00:04:30.262 "ublk_destroy_target", 00:04:30.262 "ublk_create_target", 00:04:30.262 "virtio_blk_create_transport", 00:04:30.262 "virtio_blk_get_transports", 00:04:30.262 "vhost_controller_set_coalescing", 00:04:30.262 "vhost_get_controllers", 00:04:30.262 "vhost_delete_controller", 00:04:30.262 "vhost_create_blk_controller", 00:04:30.262 "vhost_scsi_controller_remove_target", 00:04:30.262 "vhost_scsi_controller_add_target", 00:04:30.262 "vhost_start_scsi_controller", 00:04:30.262 "vhost_create_scsi_controller", 00:04:30.262 "thread_set_cpumask", 00:04:30.262 "framework_get_scheduler", 00:04:30.262 "framework_set_scheduler", 00:04:30.262 "framework_get_reactors", 00:04:30.262 "thread_get_io_channels", 00:04:30.262 "thread_get_pollers", 00:04:30.262 "thread_get_stats", 00:04:30.262 "framework_monitor_context_switch", 00:04:30.262 "spdk_kill_instance", 00:04:30.262 "log_enable_timestamps", 00:04:30.262 "log_get_flags", 00:04:30.262 "log_clear_flag", 00:04:30.262 "log_set_flag", 00:04:30.262 "log_get_level", 00:04:30.262 "log_set_level", 00:04:30.262 "log_get_print_level", 00:04:30.262 "log_set_print_level", 00:04:30.263 "framework_enable_cpumask_locks", 00:04:30.263 "framework_disable_cpumask_locks", 00:04:30.263 "framework_wait_init", 00:04:30.263 "framework_start_init", 00:04:30.263 "scsi_get_devices", 00:04:30.263 "bdev_get_histogram", 00:04:30.263 "bdev_enable_histogram", 00:04:30.263 "bdev_set_qos_limit", 00:04:30.263 "bdev_set_qd_sampling_period", 00:04:30.263 "bdev_get_bdevs", 00:04:30.263 "bdev_reset_iostat", 00:04:30.263 "bdev_get_iostat", 00:04:30.263 "bdev_examine", 00:04:30.263 
"bdev_wait_for_examine", 00:04:30.263 "bdev_set_options", 00:04:30.263 "notify_get_notifications", 00:04:30.263 "notify_get_types", 00:04:30.263 "accel_get_stats", 00:04:30.263 "accel_set_options", 00:04:30.263 "accel_set_driver", 00:04:30.263 "accel_crypto_key_destroy", 00:04:30.263 "accel_crypto_keys_get", 00:04:30.263 "accel_crypto_key_create", 00:04:30.263 "accel_assign_opc", 00:04:30.263 "accel_get_module_info", 00:04:30.263 "accel_get_opc_assignments", 00:04:30.263 "vmd_rescan", 00:04:30.263 "vmd_remove_device", 00:04:30.263 "vmd_enable", 00:04:30.263 "sock_get_default_impl", 00:04:30.263 "sock_set_default_impl", 00:04:30.263 "sock_impl_set_options", 00:04:30.263 "sock_impl_get_options", 00:04:30.263 "iobuf_get_stats", 00:04:30.263 "iobuf_set_options", 00:04:30.263 "keyring_get_keys", 00:04:30.263 "framework_get_pci_devices", 00:04:30.263 "framework_get_config", 00:04:30.263 "framework_get_subsystems", 00:04:30.263 "vfu_tgt_set_base_path", 00:04:30.263 "trace_get_info", 00:04:30.263 "trace_get_tpoint_group_mask", 00:04:30.263 "trace_disable_tpoint_group", 00:04:30.263 "trace_enable_tpoint_group", 00:04:30.263 "trace_clear_tpoint_mask", 00:04:30.263 "trace_set_tpoint_mask", 00:04:30.263 "spdk_get_version", 00:04:30.263 "rpc_get_methods" 00:04:30.263 ] 00:04:30.263 10:56:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.263 10:56:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:30.263 10:56:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2073305 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 2073305 ']' 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 2073305 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:04:30.263 10:56:27 spdkcli_tcp -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2073305 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2073305' 00:04:30.263 killing process with pid 2073305 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 2073305 00:04:30.263 10:56:27 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 2073305 00:04:30.831 00:04:30.831 real 0m1.540s 00:04:30.831 user 0m2.888s 00:04:30.831 sys 0m0.405s 00:04:30.831 10:56:27 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:30.831 10:56:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.831 ************************************ 00:04:30.831 END TEST spdkcli_tcp 00:04:30.831 ************************************ 00:04:30.831 10:56:27 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.831 10:56:27 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:30.831 10:56:27 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:30.831 10:56:27 -- common/autotest_common.sh@10 -- # set +x 00:04:30.831 ************************************ 00:04:30.831 START TEST dpdk_mem_utility 00:04:30.831 ************************************ 00:04:30.831 10:56:27 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.831 * Looking for test storage... 
00:04:30.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:30.831 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:30.831 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2073601 00:04:30.831 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2073601 00:04:30.831 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.831 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 2073601 ']' 00:04:30.831 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.831 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:30.831 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.831 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:30.831 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.831 [2024-05-15 10:56:28.054977] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:30.832 [2024-05-15 10:56:28.055019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073601 ] 00:04:30.832 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.091 [2024-05-15 10:56:28.108306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.091 [2024-05-15 10:56:28.186646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.659 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:31.659 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:04:31.659 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.659 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.659 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:31.659 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.659 { 00:04:31.659 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.659 } 00:04:31.659 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:31.659 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.659 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:31.659 1 heaps totaling size 814.000000 MiB 00:04:31.659 size: 814.000000 MiB heap id: 0 00:04:31.659 end heaps---------- 00:04:31.659 8 mempools totaling size 598.116089 MiB 00:04:31.659 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.659 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.659 size: 84.521057 MiB name: bdev_io_2073601 00:04:31.659 size: 51.011292 MiB name: evtpool_2073601 00:04:31.659 size: 50.003479 
MiB name: msgpool_2073601 00:04:31.659 size: 21.763794 MiB name: PDU_Pool 00:04:31.659 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.659 size: 0.026123 MiB name: Session_Pool 00:04:31.659 end mempools------- 00:04:31.659 6 memzones totaling size 4.142822 MiB 00:04:31.659 size: 1.000366 MiB name: RG_ring_0_2073601 00:04:31.659 size: 1.000366 MiB name: RG_ring_1_2073601 00:04:31.659 size: 1.000366 MiB name: RG_ring_4_2073601 00:04:31.659 size: 1.000366 MiB name: RG_ring_5_2073601 00:04:31.659 size: 0.125366 MiB name: RG_ring_2_2073601 00:04:31.659 size: 0.015991 MiB name: RG_ring_3_2073601 00:04:31.659 end memzones------- 00:04:31.659 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.919 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:31.919 list of free elements. size: 12.519348 MiB 00:04:31.919 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:31.919 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:31.919 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:31.919 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:31.919 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:31.919 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:31.919 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:31.919 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:31.919 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:31.919 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:31.919 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:31.919 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:31.919 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:31.919 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:31.919 element at 
address: 0x200003a00000 with size: 0.355530 MiB
00:04:31.919 list of standard malloc elements. size: 199.218079 MiB
00:04:31.919 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:04:31.919 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:04:31.919 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:31.919 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:04:31.919 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:31.919 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:31.919 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:04:31.919 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:31.919 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:04:31.919 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003adb300 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003adb500 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003affa80 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003affb40 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:04:31.919 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:04:31.919 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200027e69040 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:04:31.919 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:04:31.919 list of memzone associated elements. size: 602.262573 MiB
00:04:31.919 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:04:31.919 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:31.919 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:04:31.919 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:31.919 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:04:31.919 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2073601_0
00:04:31.919 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:04:31.919 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2073601_0
00:04:31.919 element at address: 0x200003fff380 with size: 48.003052 MiB
00:04:31.919 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2073601_0
00:04:31.919 element at address: 0x2000195be940 with size: 20.255554 MiB
00:04:31.919 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:31.919 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:04:31.919 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:31.919 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:04:31.919 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2073601
00:04:31.919 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:04:31.919 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2073601
00:04:31.919 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:31.919 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2073601
00:04:31.919 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:04:31.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:31.919 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:04:31.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:31.919 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:04:31.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:31.919 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:04:31.919 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:31.919 element at address: 0x200003eff180 with size: 1.000488 MiB
00:04:31.919 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2073601
00:04:31.919 element at address: 0x200003affc00 with size: 1.000488 MiB
00:04:31.919 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2073601
00:04:31.919 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:04:31.919 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2073601
00:04:31.919 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:04:31.919 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2073601
00:04:31.919 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:04:31.919 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2073601
00:04:31.919 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:04:31.919 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:31.919 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:04:31.919 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:31.919 element at address: 0x20001947c540 with size: 0.250488 MiB
00:04:31.919 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:31.919 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:31.919 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2073601
00:04:31.919 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:04:31.919 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:31.919 element at address: 0x200027e69100 with size: 0.023743 MiB
00:04:31.919 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:31.919 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:04:31.919 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2073601
00:04:31.919 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:04:31.919 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:31.919 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:04:31.919 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2073601
00:04:31.919 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:04:31.919 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2073601
00:04:31.919 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:04:31.919 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:31.919 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:31.919 10:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2073601
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 2073601 ']'
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 2073601
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2073601
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:04:31.919 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2073601'
killing process with pid 2073601
10:56:28 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 2073601
00:04:31.920 10:56:28 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 2073601
00:04:32.179
00:04:32.179 real 0m1.382s
00:04:32.179 user 0m1.449s
00:04:32.179 sys 0m0.359s
00:04:32.179 10:56:29 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:32.179 10:56:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:32.179 ************************************
00:04:32.179 END TEST dpdk_mem_utility
00:04:32.179 ************************************
00:04:32.179 10:56:29 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:32.179 10:56:29 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:32.179 10:56:29 -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:32.179 10:56:29 -- common/autotest_common.sh@10 -- # set +x
00:04:32.179 ************************************
00:04:32.179 START TEST event
00:04:32.179 ************************************
00:04:32.179 10:56:29 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:32.439 * Looking for test storage...
00:04:32.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:32.439 10:56:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:32.439 10:56:29 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:32.439 10:56:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:32.439 10:56:29 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']'
00:04:32.439 10:56:29 event -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:32.439 10:56:29 event -- common/autotest_common.sh@10 -- # set +x
00:04:32.439 ************************************
00:04:32.439 START TEST event_perf
00:04:32.439 ************************************
00:04:32.439 10:56:29 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:32.439 Running I/O for 1 seconds...[2024-05-15 10:56:29.521251] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:04:32.439 [2024-05-15 10:56:29.521303] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073895 ]
00:04:32.439 EAL: No free 2048 kB hugepages reported on node 1
00:04:32.439 [2024-05-15 10:56:29.576356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:32.439 [2024-05-15 10:56:29.651686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:32.439 [2024-05-15 10:56:29.651706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:32.439 [2024-05-15 10:56:29.651799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:32.439 [2024-05-15 10:56:29.651800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:33.816 Running I/O for 1 seconds...
00:04:33.816 lcore 0: 196865
00:04:33.816 lcore 1: 196865
00:04:33.816 lcore 2: 196865
00:04:33.816 lcore 3: 196866
00:04:33.816 done.
00:04:33.816
00:04:33.816 real 0m1.239s
00:04:33.816 user 0m4.162s
00:04:33.816 sys 0m0.073s
00:04:33.816 10:56:30 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:33.816 10:56:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:33.816 ************************************
00:04:33.816 END TEST event_perf
00:04:33.816 ************************************
00:04:33.816 10:56:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:33.816 10:56:30 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']'
00:04:33.816 10:56:30 event -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:33.816 10:56:30 event -- common/autotest_common.sh@10 -- # set +x
00:04:33.816 ************************************
00:04:33.816 START TEST event_reactor
00:04:33.816 ************************************
00:04:33.816 10:56:30 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:33.816 [2024-05-15 10:56:30.833566] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:04:33.816 [2024-05-15 10:56:30.833624] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074150 ]
00:04:33.816 EAL: No free 2048 kB hugepages reported on node 1
00:04:33.816 [2024-05-15 10:56:30.889432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:33.816 [2024-05-15 10:56:30.960244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:35.194 test_start
00:04:35.194 oneshot
00:04:35.194 tick 100
00:04:35.194 tick 100
00:04:35.194 tick 250
00:04:35.194 tick 100
00:04:35.194 tick 100
00:04:35.194 tick 250
00:04:35.194 tick 100
00:04:35.194 tick 500
00:04:35.194 tick 100
00:04:35.194 tick 100
00:04:35.194 tick 250
00:04:35.194 tick 100
00:04:35.194 tick 100
00:04:35.194 test_end
00:04:35.194
00:04:35.194 real 0m1.233s
00:04:35.194 user 0m1.159s
00:04:35.194 sys 0m0.069s
00:04:35.194 10:56:32 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:35.194 10:56:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:35.194 ************************************
00:04:35.194 END TEST event_reactor
00:04:35.194 ************************************
00:04:35.194 10:56:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:35.194 10:56:32 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']'
00:04:35.194 10:56:32 event -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:35.194 10:56:32 event -- common/autotest_common.sh@10 -- # set +x
00:04:35.194 ************************************
00:04:35.194 START TEST event_reactor_perf
00:04:35.194 ************************************
00:04:35.194 10:56:32 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:35.194 [2024-05-15 10:56:32.122141] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:04:35.194 [2024-05-15 10:56:32.122210] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074398 ]
00:04:35.194 EAL: No free 2048 kB hugepages reported on node 1
00:04:35.194 [2024-05-15 10:56:32.179072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:35.194 [2024-05-15 10:56:32.250458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:36.129 test_start
00:04:36.130 test_end
00:04:36.130 Performance: 494032 events per second
00:04:36.130
00:04:36.130 real 0m1.236s
00:04:36.130 user 0m1.161s
00:04:36.130 sys 0m0.071s
00:04:36.130 10:56:33 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:36.130 10:56:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:36.130 ************************************
00:04:36.130 END TEST event_reactor_perf
00:04:36.130 ************************************
00:04:36.130 10:56:33 event -- event/event.sh@49 -- # uname -s
00:04:36.130 10:56:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:36.130 10:56:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:36.130 10:56:33 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:36.130 10:56:33 event -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:36.130 10:56:33 event -- common/autotest_common.sh@10 -- # set +x
00:04:36.388 ************************************
00:04:36.388 START TEST event_scheduler
00:04:36.388 ************************************
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:36.388 * Looking for test storage...
00:04:36.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:36.388 10:56:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:36.388 10:56:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2074677
00:04:36.388 10:56:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:36.388 10:56:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2074677
00:04:36.388 10:56:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 2074677 ']'
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable
00:04:36.388 10:56:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:36.388 [2024-05-15 10:56:33.530934] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:04:36.388 [2024-05-15 10:56:33.530980] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074677 ]
00:04:36.388 EAL: No free 2048 kB hugepages reported on node 1
00:04:36.388 [2024-05-15 10:56:33.590126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:36.647 [2024-05-15 10:56:33.676823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:36.647 [2024-05-15 10:56:33.676909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:36.647 [2024-05-15 10:56:33.676995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:36.647 [2024-05-15 10:56:33.676997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0
00:04:37.215 10:56:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:37.215 POWER: Env isn't set yet!
00:04:37.215 POWER: Attempting to initialise ACPI cpufreq power management...
00:04:37.215 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:37.215 POWER: Cannot set governor of lcore 0 to userspace
00:04:37.215 POWER: Attempting to initialise PSTAT power management...
00:04:37.215 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:04:37.215 POWER: Initialized successfully for lcore 0 power management
00:04:37.215 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:04:37.215 POWER: Initialized successfully for lcore 1 power management
00:04:37.215 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:04:37.215 POWER: Initialized successfully for lcore 2 power management
00:04:37.215 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:04:37.215 POWER: Initialized successfully for lcore 3 power management
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.215 10:56:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:37.215 [2024-05-15 10:56:34.455894] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.215 10:56:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:37.215 10:56:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 ************************************
00:04:37.475 START TEST scheduler_create_thread
00:04:37.475 ************************************
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 2
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 3
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 4
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 5
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 6
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 7
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 8
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 9
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 10
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:37.475 10:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.047 10:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:38.047 10:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:38.047 10:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:38.047 10:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:04:38.047 10:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.994 10:56:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:04:38.994
00:04:38.994 real 0m1.758s
00:04:38.994 user 0m0.019s
00:04:38.994 sys 0m0.009s
00:04:38.994 10:56:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:38.994 10:56:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.994 ************************************
00:04:38.994 END TEST scheduler_create_thread
00:04:38.994 ************************************
00:04:39.276 10:56:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:39.276 10:56:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2074677
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 2074677 ']'
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 2074677
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@952 -- # uname
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2074677
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2074677'
00:04:39.276 killing process with pid 2074677
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 2074677
00:04:39.276 10:56:36 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 2074677
00:04:39.548 [2024-05-15 10:56:36.731052] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:39.808 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:04:39.808 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:04:39.808 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:04:39.808 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:04:39.808 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:04:39.808 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:04:39.808 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:04:39.808 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:04:39.808
00:04:39.808 real 0m3.525s
00:04:39.808 user 0m6.357s
00:04:39.808 sys 0m0.353s
00:04:39.808 10:56:36 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:39.808 10:56:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:39.808 ************************************
00:04:39.808 END TEST event_scheduler
00:04:39.808 ************************************
00:04:39.808 10:56:36 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:39.808 10:56:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:39.808 10:56:36 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:39.808 10:56:36 event -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:39.808 10:56:36 event -- common/autotest_common.sh@10 -- # set +x
00:04:39.808 ************************************
00:04:39.808 START TEST app_repeat
00:04:39.808 ************************************
00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2075413
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2075413'
Process app_repeat pid: 2075413
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:04:39.808 10:56:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2075413 /var/tmp/spdk-nbd.sock
00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2075413 ']'
00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100
00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:39.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:39.808 10:56:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.808 [2024-05-15 10:56:37.047582] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:39.808 [2024-05-15 10:56:37.047632] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075413 ] 00:04:39.808 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.068 [2024-05-15 10:56:37.103752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.068 [2024-05-15 10:56:37.176334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.068 [2024-05-15 10:56:37.176337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.636 10:56:37 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:40.636 10:56:37 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:40.636 10:56:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.895 Malloc0 00:04:40.895 10:56:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.155 Malloc1 00:04:41.155 10:56:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 
00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.155 10:56:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.155 /dev/nbd0 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 
/proc/partitions 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.415 1+0 records in 00:04:41.415 1+0 records out 00:04:41.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018706 s, 21.9 MB/s 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.415 /dev/nbd1 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 
1 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.415 1+0 records in 00:04:41.415 1+0 records out 00:04:41.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018975 s, 21.6 MB/s 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:41.415 10:56:38 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.415 10:56:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:41.675 { 00:04:41.675 "nbd_device": "/dev/nbd0", 00:04:41.675 "bdev_name": "Malloc0" 00:04:41.675 }, 00:04:41.675 { 00:04:41.675 "nbd_device": "/dev/nbd1", 00:04:41.675 "bdev_name": "Malloc1" 00:04:41.675 } 00:04:41.675 ]' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.675 { 00:04:41.675 "nbd_device": "/dev/nbd0", 00:04:41.675 "bdev_name": "Malloc0" 00:04:41.675 }, 00:04:41.675 { 00:04:41.675 "nbd_device": "/dev/nbd1", 00:04:41.675 "bdev_name": "Malloc1" 00:04:41.675 } 00:04:41.675 ]' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.675 /dev/nbd1' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.675 /dev/nbd1' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.675 256+0 records in 00:04:41.675 256+0 records out 00:04:41.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104652 s, 100 MB/s 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.675 256+0 records in 00:04:41.675 256+0 records out 00:04:41.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129359 s, 81.1 MB/s 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.675 256+0 records in 00:04:41.675 256+0 records out 00:04:41.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142062 s, 73.8 MB/s 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.675 10:56:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.676 10:56:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.676 10:56:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.676 10:56:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.936 10:56:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.936 10:56:39 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.936 10:56:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.196 10:56:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.456 10:56:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.457 10:56:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
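The teardown trace above ends with `nbd_get_count` reporting 0 once both disks are stopped: the RPC's JSON is reduced to device paths and `grep -c` tallies them. A minimal stand-alone sketch of that counting step (the helper name `count_devices` is invented for illustration; the real logic lives in `bdev/nbd_common.sh`):

```shell
# Sketch of the counting step from nbd_get_count (a reconstruction, not
# the actual nbd_common.sh source). grep -c prints the number of matching
# lines; '|| true' absorbs grep's nonzero exit status when the count is
# zero, which matters in scripts running under 'set -e'.
count_devices() {
    echo "$1" | grep -c /dev/nbd || true
}

two=$(count_devices '/dev/nbd0
/dev/nbd1')
zero=$(count_devices '')
```

With two disks attached the count is 2; after both `nbd_stop_disk` calls the name list is empty, the count drops to 0, and the `'[' 0 -ne 0 ']'` check in the trace falls through.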
00:04:42.457 10:56:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.457 10:56:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.457 10:56:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.457 10:56:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.457 10:56:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.716 10:56:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.716 [2024-05-15 10:56:39.969441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.975 [2024-05-15 10:56:40.042342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.975 [2024-05-15 10:56:40.042345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.975 [2024-05-15 10:56:40.084325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.975 [2024-05-15 10:56:40.084367] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.502 10:56:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.502 10:56:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.502 spdk_app_start Round 1 00:04:45.502 10:56:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2075413 /var/tmp/spdk-nbd.sock 00:04:45.502 10:56:42 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2075413 ']' 00:04:45.502 10:56:42 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.502 10:56:42 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:45.502 10:56:42 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:45.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.502 10:56:42 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:45.502 10:56:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.761 10:56:42 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:45.761 10:56:42 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:45.761 10:56:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.020 Malloc0 00:04:46.020 10:56:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.279 Malloc1 00:04:46.279 10:56:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.279 /dev/nbd0 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.279 1+0 records in 00:04:46.279 1+0 records out 00:04:46.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173607 s, 23.6 MB/s 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.279 10:56:43 
event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:46.279 10:56:43 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.279 10:56:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.539 /dev/nbd1 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.539 1+0 records in 00:04:46.539 1+0 records out 00:04:46.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201159 s, 
20.4 MB/s 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:46.539 10:56:43 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.539 10:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.798 { 00:04:46.798 "nbd_device": "/dev/nbd0", 00:04:46.798 "bdev_name": "Malloc0" 00:04:46.798 }, 00:04:46.798 { 00:04:46.798 "nbd_device": "/dev/nbd1", 00:04:46.798 "bdev_name": "Malloc1" 00:04:46.798 } 00:04:46.798 ]' 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.798 { 00:04:46.798 "nbd_device": "/dev/nbd0", 00:04:46.798 "bdev_name": "Malloc0" 00:04:46.798 }, 00:04:46.798 { 00:04:46.798 "nbd_device": "/dev/nbd1", 00:04:46.798 "bdev_name": "Malloc1" 00:04:46.798 } 00:04:46.798 ]' 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.798 /dev/nbd1' 00:04:46.798 10:56:43 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.798 /dev/nbd1' 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.798 10:56:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.799 256+0 records in 00:04:46.799 256+0 records out 00:04:46.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102999 s, 102 MB/s 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.799 10:56:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.799 256+0 records in 00:04:46.799 256+0 records out 00:04:46.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136518 s, 76.8 MB/s 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.799 10:56:44 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.799 256+0 records in 00:04:46.799 256+0 records out 00:04:46.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144938 s, 72.3 MB/s 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.799 10:56:44 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.799 10:56:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.057 10:56:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.057 10:56:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.058 10:56:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.317 10:56:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.576 10:56:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.576 10:56:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.836 10:56:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.836 [2024-05-15 10:56:45.056934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.095 [2024-05-15 10:56:45.126040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
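The `nbd_stop_disk` sequences above poll `/proc/partitions` until the device name disappears (`waitfornbd_exit`, up to 20 attempts, then `break` / `return 0`). A self-contained sketch of that polling pattern, with a temp file standing in for `/proc/partitions` and the function name `wait_for_exit` invented for illustration:

```shell
# Reconstruction of the waitfornbd_exit polling loop seen in the log:
# retry up to 20 times until the named device no longer appears in the
# partition table. A temp file stands in for /proc/partitions here.
partitions=$(mktemp)
printf '43 0 65536 nbd0\n' > "$partitions"

wait_for_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # device gone from the table -> success
        grep -q -w "$nbd_name" "$partitions" || return 0
        sleep 0.1
    done
    return 1  # still present after 20 attempts
}

wait_for_exit nbd1 && r1=gone    # nbd1 was never in the table
: > "$partitions"                # simulate nbd0 detaching
wait_for_exit nbd0 && r2=gone
rm -f "$partitions"
```

The `-w` flag matters: it matches `nbd0` as a whole word, so a lingering `nbd0` entry is detected without also matching names like `nbd01`.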
00:04:48.095 [2024-05-15 10:56:45.126043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.095 [2024-05-15 10:56:45.168655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.095 [2024-05-15 10:56:45.168699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.632 10:56:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.632 10:56:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:50.632 spdk_app_start Round 2 00:04:50.632 10:56:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2075413 /var/tmp/spdk-nbd.sock 00:04:50.632 10:56:47 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2075413 ']' 00:04:50.632 10:56:47 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.632 10:56:47 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:50.632 10:56:47 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:50.632 10:56:47 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:50.632 10:56:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.891 10:56:48 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:50.891 10:56:48 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:50.891 10:56:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.150 Malloc0 00:04:51.150 10:56:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.150 Malloc1 00:04:51.150 10:56:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.150 10:56:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.410 /dev/nbd0 00:04:51.410 10:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.410 10:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.410 1+0 records in 00:04:51.410 1+0 records out 00:04:51.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017229 s, 23.8 MB/s 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:51.410 10:56:48 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:51.410 10:56:48 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:51.410 10:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.410 10:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.410 10:56:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.669 /dev/nbd1 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.669 1+0 records in 00:04:51.669 1+0 records out 00:04:51.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000131038 s, 31.3 MB/s 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:51.669 10:56:48 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.669 10:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.929 { 00:04:51.929 "nbd_device": "/dev/nbd0", 00:04:51.929 "bdev_name": "Malloc0" 00:04:51.929 }, 00:04:51.929 { 00:04:51.929 "nbd_device": "/dev/nbd1", 00:04:51.929 "bdev_name": "Malloc1" 00:04:51.929 } 00:04:51.929 ]' 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.929 { 00:04:51.929 "nbd_device": "/dev/nbd0", 00:04:51.929 "bdev_name": "Malloc0" 00:04:51.929 }, 00:04:51.929 { 00:04:51.929 "nbd_device": "/dev/nbd1", 00:04:51.929 "bdev_name": "Malloc1" 00:04:51.929 } 00:04:51.929 ]' 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.929 /dev/nbd1' 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.929 /dev/nbd1' 00:04:51.929 
10:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.929 10:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.929 256+0 records in 00:04:51.929 256+0 records out 00:04:51.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103555 s, 101 MB/s 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.929 256+0 records in 00:04:51.929 256+0 records out 00:04:51.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132554 s, 79.1 MB/s 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.929 256+0 records in 00:04:51.929 256+0 records out 00:04:51.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014096 s, 74.4 MB/s 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.929 10:56:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.189 10:56:49 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.189 10:56:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.448 10:56:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.448 10:56:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.706 10:56:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.964 [2024-05-15 10:56:50.068361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.964 [2024-05-15 10:56:50.144904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.964 [2024-05-15 10:56:50.144906] 
reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.964 [2024-05-15 10:56:50.186226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.964 [2024-05-15 10:56:50.186275] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.253 10:56:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2075413 /var/tmp/spdk-nbd.sock 00:04:56.253 10:56:52 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2075413 ']' 00:04:56.253 10:56:52 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.253 10:56:52 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:56.253 10:56:52 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:56.253 10:56:52 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:56.253 10:56:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:56.253 10:56:53 event.app_repeat -- event/event.sh@39 -- # killprocess 2075413 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 2075413 ']' 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 2075413 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2075413 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2075413' 00:04:56.253 killing process with pid 2075413 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@966 -- # kill 2075413 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@971 -- # wait 2075413 00:04:56.253 spdk_app_start is called in Round 0. 00:04:56.253 Shutdown signal received, stop current app iteration 00:04:56.253 Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 reinitialization... 00:04:56.253 spdk_app_start is called in Round 1. 00:04:56.253 Shutdown signal received, stop current app iteration 00:04:56.253 Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 reinitialization... 00:04:56.253 spdk_app_start is called in Round 2. 
00:04:56.253 Shutdown signal received, stop current app iteration 00:04:56.253 Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 reinitialization... 00:04:56.253 spdk_app_start is called in Round 3. 00:04:56.253 Shutdown signal received, stop current app iteration 00:04:56.253 10:56:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:56.253 10:56:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:56.253 00:04:56.253 real 0m16.253s 00:04:56.253 user 0m35.075s 00:04:56.253 sys 0m2.354s 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:56.253 10:56:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 ************************************ 00:04:56.253 END TEST app_repeat 00:04:56.253 ************************************ 00:04:56.253 10:56:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:56.253 10:56:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:56.253 10:56:53 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:56.253 10:56:53 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:56.253 10:56:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 ************************************ 00:04:56.253 START TEST cpu_locks 00:04:56.253 ************************************ 00:04:56.253 10:56:53 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:56.254 * Looking for test storage... 
00:04:56.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:56.254 10:56:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:56.254 10:56:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:56.254 10:56:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:56.254 10:56:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:56.254 10:56:53 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:56.254 10:56:53 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:56.254 10:56:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.254 ************************************ 00:04:56.254 START TEST default_locks 00:04:56.254 ************************************ 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2078399 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2078399 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2078399 ']' 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:56.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:56.254 10:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.254 [2024-05-15 10:56:53.503382] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:56.254 [2024-05-15 10:56:53.503421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078399 ] 00:04:56.512 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.512 [2024-05-15 10:56:53.556920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.512 [2024-05-15 10:56:53.628187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.080 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:57.080 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:04:57.080 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2078399 00:04:57.080 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2078399 00:04:57.080 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.340 lslocks: write error 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2078399 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 2078399 ']' 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 2078399 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2078399 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2078399' 00:04:57.340 killing process with pid 2078399 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 2078399 00:04:57.340 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 2078399 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2078399 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2078399 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 2078399 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2078399 ']' 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2078399) - No such process 00:04:57.599 ERROR: process (pid: 2078399) is no longer running 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.599 00:04:57.599 real 0m1.407s 00:04:57.599 user 0m1.465s 00:04:57.599 sys 0m0.441s 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:57.599 10:56:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.599 ************************************ 00:04:57.599 END TEST default_locks 00:04:57.599 ************************************ 00:04:57.859 10:56:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:57.859 10:56:54 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:57.859 10:56:54 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:57.859 10:56:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.859 ************************************ 00:04:57.859 START TEST default_locks_via_rpc 00:04:57.859 ************************************ 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2078663 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2078663 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2078663 ']' 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:57.859 10:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.859 [2024-05-15 10:56:54.984636] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:57.859 [2024-05-15 10:56:54.984680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078663 ] 00:04:57.859 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.860 [2024-05-15 10:56:55.037133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.860 [2024-05-15 10:56:55.113136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.797 10:56:55 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2078663 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2078663 00:04:58.797 10:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2078663 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 2078663 ']' 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 2078663 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2078663 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2078663' 00:04:59.056 killing process with pid 2078663 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 
2078663 00:04:59.056 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 2078663 00:04:59.626 00:04:59.626 real 0m1.669s 00:04:59.626 user 0m1.764s 00:04:59.626 sys 0m0.523s 00:04:59.626 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:59.626 10:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.626 ************************************ 00:04:59.626 END TEST default_locks_via_rpc 00:04:59.626 ************************************ 00:04:59.626 10:56:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:59.626 10:56:56 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:59.626 10:56:56 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:59.626 10:56:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.626 ************************************ 00:04:59.626 START TEST non_locking_app_on_locked_coremask 00:04:59.626 ************************************ 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2078929 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2078929 /var/tmp/spdk.sock 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2078929 ']' 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:59.626 10:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.626 [2024-05-15 10:56:56.720525] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:59.626 [2024-05-15 10:56:56.720567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078929 ] 00:04:59.626 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.626 [2024-05-15 10:56:56.774146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.626 [2024-05-15 10:56:56.842978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2079158 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2079158 /var/tmp/spdk2.sock 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2079158 ']' 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.564 10:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.564 [2024-05-15 10:56:57.566868] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:00.564 [2024-05-15 10:56:57.566919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079158 ] 00:05:00.564 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.564 [2024-05-15 10:56:57.643503] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:00.564 [2024-05-15 10:56:57.643532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.564 [2024-05-15 10:56:57.787657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.165 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:01.165 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:01.165 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2078929 00:05:01.165 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2078929 00:05:01.165 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.767 lslocks: write error 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2078929 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2078929 ']' 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2078929 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2078929 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 2078929' 00:05:01.767 killing process with pid 2078929 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2078929 00:05:01.767 10:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2078929 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2079158 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2079158 ']' 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2079158 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2079158 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2079158' 00:05:02.336 killing process with pid 2079158 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2079158 00:05:02.336 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2079158 00:05:02.905 00:05:02.905 real 0m3.251s 00:05:02.905 user 0m3.483s 00:05:02.905 sys 0m0.891s 00:05:02.905 10:56:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:02.905 10:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.905 ************************************ 00:05:02.905 END TEST non_locking_app_on_locked_coremask 00:05:02.905 ************************************ 00:05:02.905 10:56:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:02.905 10:56:59 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:02.905 10:56:59 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:02.905 10:56:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.905 ************************************ 00:05:02.905 START TEST locking_app_on_unlocked_coremask 00:05:02.905 ************************************ 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2079555 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2079555 /var/tmp/spdk.sock 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2079555 ']' 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:02.905 10:56:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:02.905 10:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.905 [2024-05-15 10:57:00.047017] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:02.905 [2024-05-15 10:57:00.047066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079555 ] 00:05:02.905 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.905 [2024-05-15 10:57:00.100063] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:02.905 [2024-05-15 10:57:00.100101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.164 [2024-05-15 10:57:00.185855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.733 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2079732 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2079732 /var/tmp/spdk2.sock 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2079732 ']' 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:03.734 10:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.734 [2024-05-15 10:57:00.897287] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:03.734 [2024-05-15 10:57:00.897333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079732 ] 00:05:03.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.734 [2024-05-15 10:57:00.975314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.992 [2024-05-15 10:57:01.124961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.646 10:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:04.646 10:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:04.646 10:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2079732 00:05:04.646 10:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.646 10:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2079732 00:05:05.214 lslocks: write error 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2079555 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2079555 ']' 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2079555 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2079555 00:05:05.214 10:57:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2079555' 00:05:05.214 killing process with pid 2079555 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2079555 00:05:05.214 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2079555 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2079732 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2079732 ']' 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2079732 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2079732 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2079732' 00:05:05.782 killing process with pid 2079732 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@966 -- # kill 2079732 00:05:05.782 10:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2079732 00:05:06.349 00:05:06.349 real 0m3.340s 00:05:06.349 user 0m3.583s 00:05:06.349 sys 0m0.938s 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.349 ************************************ 00:05:06.349 END TEST locking_app_on_unlocked_coremask 00:05:06.349 ************************************ 00:05:06.349 10:57:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:06.349 10:57:03 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:06.349 10:57:03 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:06.349 10:57:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.349 ************************************ 00:05:06.349 START TEST locking_app_on_locked_coremask 00:05:06.349 ************************************ 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2080283 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2080283 /var/tmp/spdk.sock 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2080283 ']' 00:05:06.349 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.350 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:06.350 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.350 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:06.350 10:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.350 [2024-05-15 10:57:03.443495] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:06.350 [2024-05-15 10:57:03.443535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080283 ] 00:05:06.350 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.350 [2024-05-15 10:57:03.496457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.350 [2024-05-15 10:57:03.575542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2080509 00:05:07.287 10:57:04 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2080509 /var/tmp/spdk2.sock 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2080509 /var/tmp/spdk2.sock 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2080509 /var/tmp/spdk2.sock 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2080509 ']' 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:07.287 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.287 [2024-05-15 10:57:04.273454] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:07.287 [2024-05-15 10:57:04.273497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080509 ] 00:05:07.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.287 [2024-05-15 10:57:04.348551] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2080283 has claimed it. 00:05:07.287 [2024-05-15 10:57:04.348585] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:07.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2080509) - No such process 00:05:07.856 ERROR: process (pid: 2080509) is no longer running 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # 
locks_exist 2080283 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.856 10:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2080283 00:05:08.115 lslocks: write error 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2080283 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2080283 ']' 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2080283 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2080283 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2080283' 00:05:08.115 killing process with pid 2080283 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2080283 00:05:08.115 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2080283 00:05:08.684 00:05:08.684 real 0m2.295s 00:05:08.684 user 0m2.526s 00:05:08.684 sys 0m0.592s 00:05:08.684 10:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:08.684 10:57:05 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.684 ************************************ 00:05:08.684 END TEST locking_app_on_locked_coremask 00:05:08.684 ************************************ 00:05:08.684 10:57:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:08.684 10:57:05 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:08.684 10:57:05 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:08.684 10:57:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.684 ************************************ 00:05:08.684 START TEST locking_overlapped_coremask 00:05:08.684 ************************************ 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2080986 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2080986 /var/tmp/spdk.sock 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2080986 ']' 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:08.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:08.684 10:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.684 [2024-05-15 10:57:05.800147] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:08.684 [2024-05-15 10:57:05.800193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080986 ] 00:05:08.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.684 [2024-05-15 10:57:05.852940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.684 [2024-05-15 10:57:05.934635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.684 [2024-05-15 10:57:05.934709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.684 [2024-05-15 10:57:05.934711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2081164 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2081164 /var/tmp/spdk2.sock 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- 
# local es=0 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2081164 /var/tmp/spdk2.sock 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2081164 /var/tmp/spdk2.sock 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2081164 ']' 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:09.622 10:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.622 [2024-05-15 10:57:06.644458] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:09.622 [2024-05-15 10:57:06.644507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081164 ] 00:05:09.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.622 [2024-05-15 10:57:06.723266] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2080986 has claimed it. 00:05:09.622 [2024-05-15 10:57:06.723306] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2081164) - No such process 00:05:10.190 ERROR: process (pid: 2081164) is no longer running 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2080986 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 2080986 ']' 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 2080986 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2080986 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2080986' 00:05:10.190 killing process with pid 2080986 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 2080986 00:05:10.190 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 2080986 00:05:10.449 00:05:10.449 real 0m1.920s 00:05:10.449 user 0m5.390s 00:05:10.449 sys 0m0.379s 00:05:10.449 10:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:10.449 10:57:07 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:10.449 ************************************ 00:05:10.449 END TEST locking_overlapped_coremask 00:05:10.449 ************************************ 00:05:10.449 10:57:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:10.449 10:57:07 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:10.449 10:57:07 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:10.449 10:57:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.709 ************************************ 00:05:10.709 START TEST locking_overlapped_coremask_via_rpc 00:05:10.709 ************************************ 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2081441 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2081441 /var/tmp/spdk.sock 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2081441 ']' 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:10.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:10.709 10:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.709 [2024-05-15 10:57:07.794542] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:10.709 [2024-05-15 10:57:07.794584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081441 ] 00:05:10.709 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.709 [2024-05-15 10:57:07.847276] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:10.709 [2024-05-15 10:57:07.847302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.709 [2024-05-15 10:57:07.916581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.709 [2024-05-15 10:57:07.916678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.709 [2024-05-15 10:57:07.916680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2081674 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2081674 /var/tmp/spdk2.sock 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2081674 ']' 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:11.646 10:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.646 [2024-05-15 10:57:08.639659] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:11.646 [2024-05-15 10:57:08.639711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081674 ] 00:05:11.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.647 [2024-05-15 10:57:08.714070] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.647 [2024-05-15 10:57:08.714099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.647 [2024-05-15 10:57:08.866168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.647 [2024-05-15 10:57:08.866279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.647 [2024-05-15 10:57:08.866280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.215 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:12.216 10:57:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.216 [2024-05-15 10:57:09.454234] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2081441 has claimed it. 00:05:12.216 request: 00:05:12.216 { 00:05:12.216 "method": "framework_enable_cpumask_locks", 00:05:12.216 "req_id": 1 00:05:12.216 } 00:05:12.216 Got JSON-RPC error response 00:05:12.216 response: 00:05:12.216 { 00:05:12.216 "code": -32603, 00:05:12.216 "message": "Failed to claim CPU core: 2" 00:05:12.216 } 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2081441 /var/tmp/spdk.sock 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 
-- # '[' -z 2081441 ']' 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:12.216 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2081674 /var/tmp/spdk2.sock 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2081674 ']' 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:12.475 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.735 00:05:12.735 real 0m2.102s 00:05:12.735 user 0m0.871s 00:05:12.735 sys 0m0.168s 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:12.735 10:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.735 ************************************ 00:05:12.735 END TEST locking_overlapped_coremask_via_rpc 00:05:12.735 ************************************ 00:05:12.735 10:57:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.735 10:57:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2081441 ]] 00:05:12.735 10:57:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2081441 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2081441 ']' 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2081441 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2081441 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2081441' 00:05:12.735 killing process with pid 2081441 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2081441 00:05:12.735 10:57:09 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2081441 00:05:13.304 10:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2081674 ]] 00:05:13.304 10:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2081674 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2081674 ']' 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2081674 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2081674 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
2081674' 00:05:13.304 killing process with pid 2081674 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2081674 00:05:13.304 10:57:10 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2081674 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2081441 ]] 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2081441 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2081441 ']' 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2081441 00:05:13.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2081441) - No such process 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2081441 is not found' 00:05:13.563 Process with pid 2081441 is not found 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2081674 ]] 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2081674 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2081674 ']' 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2081674 00:05:13.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2081674) - No such process 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2081674 is not found' 00:05:13.563 Process with pid 2081674 is not found 00:05:13.563 10:57:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.563 00:05:13.563 real 0m17.337s 00:05:13.563 user 0m29.788s 00:05:13.563 sys 0m4.824s 00:05:13.563 10:57:10 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:13.563 
10:57:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.563 ************************************ 00:05:13.563 END TEST cpu_locks 00:05:13.563 ************************************ 00:05:13.563 00:05:13.563 real 0m41.313s 00:05:13.563 user 1m17.866s 00:05:13.563 sys 0m8.081s 00:05:13.563 10:57:10 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:13.563 10:57:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.563 ************************************ 00:05:13.563 END TEST event 00:05:13.563 ************************************ 00:05:13.563 10:57:10 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.563 10:57:10 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:13.563 10:57:10 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:13.563 10:57:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.563 ************************************ 00:05:13.563 START TEST thread 00:05:13.563 ************************************ 00:05:13.563 10:57:10 thread -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.822 * Looking for test storage... 
00:05:13.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:13.822 10:57:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.822 10:57:10 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:13.823 10:57:10 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:13.823 10:57:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.823 ************************************ 00:05:13.823 START TEST thread_poller_perf 00:05:13.823 ************************************ 00:05:13.823 10:57:10 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.823 [2024-05-15 10:57:10.900711] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:13.823 [2024-05-15 10:57:10.900779] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082077 ] 00:05:13.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.823 [2024-05-15 10:57:10.957581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.823 [2024-05-15 10:57:11.036122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.823 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:15.202 ====================================== 00:05:15.202 busy:2306230280 (cyc) 00:05:15.202 total_run_count: 411000 00:05:15.202 tsc_hz: 2300000000 (cyc) 00:05:15.202 ====================================== 00:05:15.202 poller_cost: 5611 (cyc), 2439 (nsec) 00:05:15.202 00:05:15.202 real 0m1.252s 00:05:15.202 user 0m1.172s 00:05:15.202 sys 0m0.076s 00:05:15.202 10:57:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:15.202 10:57:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.202 ************************************ 00:05:15.202 END TEST thread_poller_perf 00:05:15.202 ************************************ 00:05:15.202 10:57:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.202 10:57:12 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:15.202 10:57:12 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:15.202 10:57:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.202 ************************************ 00:05:15.202 START TEST thread_poller_perf 00:05:15.202 ************************************ 00:05:15.202 10:57:12 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.202 [2024-05-15 10:57:12.228484] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:15.202 [2024-05-15 10:57:12.228550] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082330 ] 00:05:15.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.202 [2024-05-15 10:57:12.286557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.202 [2024-05-15 10:57:12.361862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.202 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:16.581 ====================================== 00:05:16.581 busy:2301729954 (cyc) 00:05:16.581 total_run_count: 5422000 00:05:16.581 tsc_hz: 2300000000 (cyc) 00:05:16.581 ====================================== 00:05:16.581 poller_cost: 424 (cyc), 184 (nsec) 00:05:16.581 00:05:16.581 real 0m1.245s 00:05:16.581 user 0m1.172s 00:05:16.581 sys 0m0.070s 00:05:16.581 10:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:16.581 10:57:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.581 ************************************ 00:05:16.581 END TEST thread_poller_perf 00:05:16.581 ************************************ 00:05:16.581 10:57:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.581 00:05:16.581 real 0m2.728s 00:05:16.581 user 0m2.427s 00:05:16.581 sys 0m0.305s 00:05:16.581 10:57:13 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:16.581 10:57:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.581 ************************************ 00:05:16.581 END TEST thread 00:05:16.581 ************************************ 00:05:16.581 10:57:13 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.581 10:57:13 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:16.581 
10:57:13 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:16.581 10:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.581 ************************************ 00:05:16.581 START TEST accel 00:05:16.581 ************************************ 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.581 * Looking for test storage... 00:05:16.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:16.581 10:57:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:16.581 10:57:13 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:16.581 10:57:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.581 10:57:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2082666 00:05:16.581 10:57:13 accel -- accel/accel.sh@63 -- # waitforlisten 2082666 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@828 -- # '[' -z 2082666 ']' 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.581 10:57:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:16.581 10:57:13 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.581 10:57:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:16.581 10:57:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.581 10:57:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.581 10:57:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.581 10:57:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.581 10:57:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.581 10:57:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:16.581 10:57:13 accel -- accel/accel.sh@41 -- # jq -r . 00:05:16.581 [2024-05-15 10:57:13.690588] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:16.581 [2024-05-15 10:57:13.690640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082666 ] 00:05:16.581 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.581 [2024-05-15 10:57:13.743582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.581 [2024-05-15 10:57:13.823973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@861 -- # return 0 00:05:17.519 10:57:14 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:17.519 10:57:14 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:17.519 10:57:14 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:17.519 10:57:14 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:17.519 10:57:14 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:17.519 10:57:14 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:17.519 10:57:14 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # 
IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.519 10:57:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.519 10:57:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.519 10:57:14 accel -- accel/accel.sh@75 -- # killprocess 2082666 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@947 -- # '[' -z 2082666 ']' 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@951 -- # kill -0 2082666 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@952 -- # uname 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2082666 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2082666' 00:05:17.519 killing process with pid 2082666 00:05:17.519 10:57:14 accel -- common/autotest_common.sh@966 -- # kill 2082666 00:05:17.520 10:57:14 accel -- common/autotest_common.sh@971 -- # wait 2082666 00:05:17.779 10:57:14 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:17.779 10:57:14 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:17.779 10:57:14 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 
00:05:17.779 10:57:14 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:17.779 10:57:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.779 10:57:14 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:17.779 10:57:14 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:17.779 10:57:14 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:17.779 10:57:14 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:17.779 10:57:15 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:17.779 10:57:15 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:17.779 10:57:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:17.779 10:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.039 ************************************ 00:05:18.039 START TEST accel_missing_filename 00:05:18.039 ************************************ 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.039 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.039 10:57:15 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:18.039 10:57:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:18.039 [2024-05-15 10:57:15.081630] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:18.039 [2024-05-15 10:57:15.081687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082975 ] 00:05:18.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.039 [2024-05-15 10:57:15.131015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.039 [2024-05-15 10:57:15.204128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.039 [2024-05-15 10:57:15.245516] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.039 [2024-05-15 10:57:15.305399] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:18.299 A filename is required. 
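The `NOT` helper's exit-status trace below (es=234, then es > 128, then es=106, then es=1) suggests a normalization step: statuses above 128 (the signal range) have 128 subtracted, and any remaining nonzero status collapses to 1. A hypothetical sketch of that logic, inferred only from the traced values in this log:

```python
def normalize_exit_status(es: int) -> int:
    """Mirror the es transitions shown in the xtrace: signal-range
    statuses drop the 128 offset, then any failure becomes 1."""
    if es > 128:          # e.g. 234 -> 106, 161 -> 33
        es -= 128
    if es != 0:           # collapse any nonzero status to a plain failure
        es = 1
    return es

print(normalize_exit_status(234))  # 1, matching es=234 -> es=106 -> es=1
```

The same pattern fits the accel_compress_verify trace further down (es=161, then es=33, then es=1).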
00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:18.299 00:05:18.299 real 0m0.337s 00:05:18.299 user 0m0.271s 00:05:18.299 sys 0m0.106s 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.299 10:57:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:18.299 ************************************ 00:05:18.299 END TEST accel_missing_filename 00:05:18.299 ************************************ 00:05:18.299 10:57:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.299 10:57:15 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:18.299 10:57:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.299 10:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.299 ************************************ 00:05:18.299 START TEST accel_compress_verify 00:05:18.299 ************************************ 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.299 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:18.299 10:57:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:18.299 [2024-05-15 10:57:15.483322] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:18.299 [2024-05-15 10:57:15.483393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083062 ] 00:05:18.299 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.299 [2024-05-15 10:57:15.540206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.559 [2024-05-15 10:57:15.613895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.559 [2024-05-15 10:57:15.655067] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.559 [2024-05-15 10:57:15.714870] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:18.559 00:05:18.559 Compression does not support the verify option, aborting. 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:18.559 00:05:18.559 real 0m0.357s 00:05:18.559 user 0m0.288s 00:05:18.559 sys 0m0.110s 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.559 10:57:15 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:18.559 ************************************ 00:05:18.559 END TEST accel_compress_verify 00:05:18.559 ************************************ 00:05:18.819 10:57:15 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:18.819 
10:57:15 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:18.819 10:57:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.819 10:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.819 ************************************ 00:05:18.819 START TEST accel_wrong_workload 00:05:18.819 ************************************ 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:18.819 10:57:15 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:18.819 Unsupported workload type: foobar 00:05:18.819 [2024-05-15 10:57:15.912561] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:18.819 accel_perf options: 00:05:18.819 [-h help message] 00:05:18.819 [-q queue depth per core] 00:05:18.819 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:18.819 [-T number of threads per core 00:05:18.819 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:18.819 [-t time in seconds] 00:05:18.819 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:18.819 [ dif_verify, , dif_generate, dif_generate_copy 00:05:18.819 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:18.819 [-l for compress/decompress workloads, name of uncompressed input file 00:05:18.819 [-S for crc32c workload, use this seed value (default 0) 00:05:18.819 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:18.819 [-f for fill workload, use this BYTE value (default 255) 00:05:18.819 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:18.819 [-y verify result if this switch is on] 00:05:18.819 [-a tasks to allocate per core (default: same value as -q)] 00:05:18.819 Can be used to spread operations across a wider range of memory. 
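Among the workloads listed in the accel_perf help text above is crc32c, exercised by the accel_crc32c test further down with `-S 32` (a seed value, per the `-S` option description). For reference, a minimal bitwise sketch of the CRC-32C (Castagnoli) checksum that workload computes; this is an illustrative pure-Python implementation, not SPDK's accelerated code path:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF                     # initial inversion
    for byte in data:
        crc ^= byte
        for _ in range(8):                # process one bit at a time
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF               # final inversion

print(hex(crc32c(b"123456789")))  # 0xe3069283, the standard CRC-32C check value
```

Production code would use a table-driven or hardware (SSE4.2) implementation; the accel framework's "software" module assignments seen below fall back to an optimized software path.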
00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:18.819 00:05:18.819 real 0m0.033s 00:05:18.819 user 0m0.022s 00:05:18.819 sys 0m0.011s 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.819 10:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:18.819 ************************************ 00:05:18.819 END TEST accel_wrong_workload 00:05:18.819 ************************************ 00:05:18.819 Error: writing output failed: Broken pipe 00:05:18.819 10:57:15 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:18.819 10:57:15 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:18.819 10:57:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.819 10:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.819 ************************************ 00:05:18.819 START TEST accel_negative_buffers 00:05:18.819 ************************************ 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.819 10:57:15 
accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.819 10:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:18.819 10:57:15 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:18.819 -x option must be non-negative. 00:05:18.819 [2024-05-15 10:57:16.014714] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:18.819 accel_perf options: 00:05:18.819 [-h help message] 00:05:18.819 [-q queue depth per core] 00:05:18.819 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:18.819 [-T number of threads per core 00:05:18.819 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:18.819 [-t time in seconds] 00:05:18.819 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:18.819 [ dif_verify, , dif_generate, dif_generate_copy 00:05:18.819 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:18.819 [-l for compress/decompress workloads, name of uncompressed input file 00:05:18.819 [-S for crc32c workload, use this seed value (default 0) 00:05:18.819 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:18.819 [-f for fill workload, use this BYTE value (default 255) 00:05:18.819 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:18.819 [-y verify result if this switch is on] 00:05:18.819 [-a tasks to allocate per core (default: same value as -q)] 00:05:18.819 Can be used to spread operations across a wider range of memory. 00:05:18.819 10:57:16 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:18.819 10:57:16 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:18.819 10:57:16 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:18.819 10:57:16 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:18.819 00:05:18.819 real 0m0.030s 00:05:18.819 user 0m0.021s 00:05:18.819 sys 0m0.009s 00:05:18.819 10:57:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.819 10:57:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:18.819 ************************************ 00:05:18.819 END TEST accel_negative_buffers 00:05:18.819 ************************************ 00:05:18.819 Error: writing output failed: Broken pipe 00:05:18.819 10:57:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c 
-S 32 -y 00:05:18.819 10:57:16 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:18.819 10:57:16 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.819 10:57:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.079 ************************************ 00:05:19.079 START TEST accel_crc32c 00:05:19.079 ************************************ 00:05:19.079 10:57:16 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:19.079 10:57:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:19.079 [2024-05-15 10:57:16.116545] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:19.079 [2024-05-15 10:57:16.116611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083128 ] 00:05:19.079 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.079 [2024-05-15 10:57:16.172069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.079 [2024-05-15 10:57:16.246509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.080 10:57:16 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:19.080 10:57:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:20.459 10:57:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:20.459
00:05:20.459 real 0m1.362s
00:05:20.459 user 0m1.263s
00:05:20.459 sys 0m0.113s
00:05:20.459 10:57:17 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable
00:05:20.459 10:57:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:20.459 ************************************
00:05:20.459 END TEST accel_crc32c
00:05:20.459 ************************************
00:05:20.459 10:57:17 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:20.459 10:57:17 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']'
00:05:20.459 10:57:17 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:05:20.459 10:57:17 accel -- common/autotest_common.sh@10 -- # set +x
00:05:20.459 ************************************
00:05:20.459 START TEST accel_crc32c_C2
00:05:20.459 ************************************
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
[2024-05-15 10:57:17.544354] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
[2024-05-15 10:57:17.544410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083382 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 10:57:17.601518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-05-15 10:57:17.675665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.459 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:20.719 10:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:21.697
00:05:21.697 real 0m1.363s
00:05:21.697 user 0m1.265s
00:05:21.697 sys 0m0.112s
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable
00:05:21.697 10:57:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:21.697 ************************************
00:05:21.697 END TEST accel_crc32c_C2
00:05:21.697 ************************************
00:05:21.697 10:57:18 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:21.697 10:57:18 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']'
00:05:21.697 10:57:18 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:05:21.697 10:57:18 accel -- common/autotest_common.sh@10 -- # set +x
00:05:21.697 ************************************
00:05:21.697 START TEST accel_copy
00:05:21.697 ************************************
00:05:21.697 10:57:18 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:21.697 10:57:18 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
[2024-05-15 10:57:18.946090] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:05:21.697 [2024-05-15 10:57:18.946125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083639 ]
00:05:21.956 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.956 [2024-05-15 10:57:18.999348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.956 [2024-05-15 10:57:19.072334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.956 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:21.957 10:57:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:05:23.336 10:57:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:23.336
00:05:23.336 real 0m1.343s
00:05:23.336 user 0m1.245s
00:05:23.336 sys 0m0.111s
00:05:23.336 10:57:20 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable
00:05:23.336 10:57:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:05:23.336 ************************************
00:05:23.336 END TEST accel_copy
00:05:23.336 ************************************
00:05:23.336 10:57:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:23.336 10:57:20 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']'
00:05:23.336 10:57:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:05:23.336 10:57:20 accel -- common/autotest_common.sh@10 -- # set +x
00:05:23.336 ************************************
00:05:23.336 START TEST accel_fill
00:05:23.336 ************************************
00:05:23.336 10:57:20 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:05:23.336 10:57:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:05:23.336 [2024-05-15 10:57:20.345405] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:05:23.336 [2024-05-15 10:57:20.345441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083900 ]
00:05:23.337 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.337 [2024-05-15 10:57:20.397760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.337 [2024-05-15 10:57:20.476556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:23.337 10:57:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@27 -- #
[[ -n fill ]] 00:05:24.717 10:57:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.717 00:05:24.717 real 0m1.348s 00:05:24.717 user 0m1.254s 00:05:24.717 sys 0m0.108s 00:05:24.717 10:57:21 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.717 10:57:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:24.717 ************************************ 00:05:24.717 END TEST accel_fill 00:05:24.717 ************************************ 00:05:24.717 10:57:21 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:24.717 10:57:21 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:24.717 10:57:21 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.717 10:57:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.717 ************************************ 00:05:24.717 START TEST accel_copy_crc32c 00:05:24.717 ************************************ 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:24.717 [2024-05-15 10:57:21.774756] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:24.717 [2024-05-15 10:57:21.774821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084172 ] 00:05:24.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.717 [2024-05-15 10:57:21.830488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.717 [2024-05-15 10:57:21.902300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c 
-- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.717 10:57:21 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.717 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.718 10:57:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.093 00:05:26.093 real 0m1.359s 00:05:26.093 user 0m1.257s 00:05:26.093 sys 0m0.116s 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.093 10:57:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:26.093 ************************************ 00:05:26.093 END TEST accel_copy_crc32c 00:05:26.093 
************************************ 00:05:26.093 10:57:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:26.093 10:57:23 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:26.093 10:57:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.093 10:57:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.093 ************************************ 00:05:26.093 START TEST accel_copy_crc32c_C2 00:05:26.093 ************************************ 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.093 10:57:23 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:26.093 [2024-05-15 10:57:23.186415] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:26.093 [2024-05-15 10:57:23.186451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084452 ] 00:05:26.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.093 [2024-05-15 10:57:23.238556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.093 [2024-05-15 10:57:23.310382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.093 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.351 10:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.286 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.287 00:05:27.287 real 0m1.341s 00:05:27.287 user 0m1.241s 00:05:27.287 sys 0m0.113s 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:27.287 10:57:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:27.287 ************************************ 00:05:27.287 END TEST accel_copy_crc32c_C2 00:05:27.287 ************************************ 00:05:27.287 10:57:24 accel -- accel/accel.sh@107 -- # 
run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:27.287 10:57:24 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:27.287 10:57:24 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:27.287 10:57:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.546 ************************************ 00:05:27.546 START TEST accel_dualcast 00:05:27.546 ************************************ 00:05:27.546 10:57:24 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:27.546 [2024-05-15 10:57:24.613321] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:27.546 [2024-05-15 10:57:24.613387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084729 ] 00:05:27.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.546 [2024-05-15 10:57:24.669979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.546 [2024-05-15 10:57:24.741945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 
10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.546 10:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.922 10:57:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.922 10:57:25 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.923 10:57:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:28.923 10:57:25 
accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.923 00:05:28.923 real 0m1.361s 00:05:28.923 user 0m1.255s 00:05:28.923 sys 0m0.119s 00:05:28.923 10:57:25 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:28.923 10:57:25 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:28.923 ************************************ 00:05:28.923 END TEST accel_dualcast 00:05:28.923 ************************************ 00:05:28.923 10:57:25 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:28.923 10:57:25 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:28.923 10:57:25 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.923 10:57:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.923 ************************************ 00:05:28.923 START TEST accel_compare 00:05:28.923 ************************************ 00:05:28.923 10:57:26 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.923 10:57:26 accel.accel_compare -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:28.923 10:57:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:28.923 [2024-05-15 10:57:26.040327] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:28.923 [2024-05-15 10:57:26.040391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085004 ] 00:05:28.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.923 [2024-05-15 10:57:26.094782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.923 [2024-05-15 10:57:26.166618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 
10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.182 10:57:26 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.182 10:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:30.119 10:57:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.119 00:05:30.119 real 0m1.358s 00:05:30.119 user 0m1.263s 00:05:30.119 sys 0m0.108s 00:05:30.119 10:57:27 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:30.119 10:57:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:30.119 ************************************ 00:05:30.119 END TEST accel_compare 00:05:30.119 ************************************ 00:05:30.379 10:57:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:30.379 10:57:27 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:30.379 10:57:27 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:30.379 10:57:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.379 ************************************ 00:05:30.379 START TEST accel_xor 00:05:30.379 ************************************ 00:05:30.379 10:57:27 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:30.379 [2024-05-15 10:57:27.462878] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:30.379 [2024-05-15 10:57:27.462940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085305 ] 00:05:30.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.379 [2024-05-15 10:57:27.518576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.379 [2024-05-15 10:57:27.590700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:30.379 10:57:27 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.379 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.380 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.380 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.380 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.380 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.380 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.380 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.638 10:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:31.575 10:57:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.575 00:05:31.575 real 0m1.360s 00:05:31.575 user 0m1.264s 00:05:31.575 sys 0m0.110s 00:05:31.575 10:57:28 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:31.575 10:57:28 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 ************************************ 00:05:31.575 END TEST accel_xor 00:05:31.575 ************************************ 00:05:31.575 10:57:28 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:31.575 10:57:28 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:31.575 10:57:28 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:31.575 10:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.838 ************************************ 00:05:31.838 START TEST accel_xor 00:05:31.838 ************************************ 00:05:31.838 10:57:28 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w xor -y -x 3 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:31.838 10:57:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:31.838 [2024-05-15 10:57:28.888134] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:31.838 [2024-05-15 10:57:28.888210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085577 ] 00:05:31.838 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.838 [2024-05-15 10:57:28.942911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.838 [2024-05-15 10:57:29.014256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:31.838 10:57:29 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.838 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.839 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.839 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.839 10:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.839 10:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.839 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.839 10:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.216 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:33.217 10:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.217 00:05:33.217 real 0m1.361s 00:05:33.217 user 0m1.265s 00:05:33.217 sys 0m0.110s 00:05:33.217 10:57:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:33.217 10:57:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:33.217 ************************************ 00:05:33.217 END TEST accel_xor 00:05:33.217 ************************************ 00:05:33.217 10:57:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:33.217 10:57:30 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:33.217 10:57:30 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:33.217 10:57:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.217 ************************************ 00:05:33.217 START TEST accel_dif_verify 00:05:33.217 ************************************ 00:05:33.217 10:57:30 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:33.217 10:57:30 accel.accel_dif_verify -- 
accel/accel.sh@17 -- # local accel_module 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:33.217 [2024-05-15 10:57:30.305884] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:33.217 [2024-05-15 10:57:30.305950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085843 ] 00:05:33.217 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.217 [2024-05-15 10:57:30.360859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.217 [2024-05-15 10:57:30.434113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 10:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.413 10:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 10:57:31 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:34.414 10:57:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.414 00:05:34.414 real 0m1.361s 00:05:34.414 user 0m1.260s 00:05:34.414 sys 0m0.115s 00:05:34.414 10:57:31 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.414 10:57:31 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:34.414 ************************************ 00:05:34.414 END TEST accel_dif_verify 00:05:34.414 ************************************ 00:05:34.414 10:57:31 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:34.414 10:57:31 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:34.414 10:57:31 accel -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:05:34.414 10:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.674 ************************************ 00:05:34.674 START TEST accel_dif_generate 00:05:34.674 ************************************ 00:05:34.674 10:57:31 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:34.674 [2024-05-15 10:57:31.729399] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:34.674 [2024-05-15 10:57:31.729460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086097 ] 00:05:34.674 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.674 [2024-05-15 10:57:31.785418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.674 [2024-05-15 10:57:31.857289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.674 10:57:31 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 10:57:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.050 10:57:33 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:36.050 10:57:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.050 00:05:36.050 real 0m1.359s 00:05:36.050 user 0m1.251s 00:05:36.050 sys 0m0.124s 00:05:36.050 10:57:33 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:36.050 10:57:33 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:36.050 ************************************ 00:05:36.050 END TEST accel_dif_generate 00:05:36.050 ************************************ 00:05:36.050 10:57:33 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:36.050 10:57:33 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:36.050 10:57:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:36.050 10:57:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.050 ************************************ 00:05:36.050 START TEST accel_dif_generate_copy 00:05:36.050 ************************************ 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:36.050 10:57:33 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:36.050 [2024-05-15 10:57:33.155478] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:36.050 [2024-05-15 10:57:33.155543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086344 ] 00:05:36.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.050 [2024-05-15 10:57:33.211192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.050 [2024-05-15 10:57:33.283218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.308 10:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.244 10:57:34 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.244 00:05:37.244 real 0m1.360s 00:05:37.244 user 0m1.256s 00:05:37.244 sys 0m0.118s 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:37.244 10:57:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 ************************************ 00:05:37.244 END TEST accel_dif_generate_copy 00:05:37.244 ************************************ 00:05:37.505 10:57:34 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:37.505 10:57:34 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.505 10:57:34 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:37.505 10:57:34 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:37.505 10:57:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.505 ************************************ 00:05:37.505 START TEST accel_comp 00:05:37.505 ************************************ 00:05:37.505 10:57:34 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:37.505 10:57:34 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:37.505 [2024-05-15 10:57:34.556360] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:37.505 [2024-05-15 10:57:34.556395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086596 ] 00:05:37.505 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.505 [2024-05-15 10:57:34.608158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.505 [2024-05-15 10:57:34.678970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.505 10:57:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:38.885 10:57:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.885 00:05:38.885 real 0m1.340s 00:05:38.885 user 0m1.246s 00:05:38.885 sys 0m0.108s 00:05:38.885 10:57:35 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:38.885 10:57:35 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 END TEST accel_comp 00:05:38.885 ************************************ 00:05:38.885 10:57:35 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.885 10:57:35 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:38.885 10:57:35 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:38.885 10:57:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 START TEST accel_decomp 00:05:38.885 ************************************ 00:05:38.885 10:57:35 accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r 
var val 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:38.885 10:57:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:38.885 [2024-05-15 10:57:35.968668] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:38.885 [2024-05-15 10:57:35.968712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086850 ] 00:05:38.885 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.885 [2024-05-15 10:57:36.022190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.885 [2024-05-15 10:57:36.094141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.885 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:39.145 10:57:36 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:39.145 10:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 10:57:37 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:40.084 10:57:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.084 00:05:40.084 real 0m1.357s 00:05:40.084 user 0m1.265s 00:05:40.084 sys 0m0.106s 00:05:40.084 10:57:37 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:40.084 10:57:37 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:40.084 ************************************ 00:05:40.084 END TEST accel_decomp 00:05:40.084 ************************************ 00:05:40.084 10:57:37 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:40.084 10:57:37 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:05:40.084 10:57:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:40.084 10:57:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.344 ************************************ 00:05:40.344 START TEST accel_decmop_full 00:05:40.344 ************************************ 00:05:40.344 10:57:37 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:40.344 10:57:37 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:40.344 10:57:37 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 
00:05:40.344 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.344 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.344 10:57:37 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:40.344 10:57:37 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:40.345 [2024-05-15 10:57:37.395667] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:40.345 [2024-05-15 10:57:37.395716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087096 ] 00:05:40.345 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.345 [2024-05-15 10:57:37.449470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.345 [2024-05-15 10:57:37.521566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- 
accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 
10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:40.345 10:57:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.722 10:57:38 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:41.722 10:57:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.723 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.723 10:57:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.723 10:57:38 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.723 10:57:38 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.723 10:57:38 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.723 00:05:41.723 real 0m1.364s 00:05:41.723 user 0m1.270s 00:05:41.723 sys 0m0.108s 00:05:41.723 10:57:38 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.723 10:57:38 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 ************************************ 00:05:41.723 END TEST accel_decmop_full 00:05:41.723 ************************************ 00:05:41.723 10:57:38 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:41.723 10:57:38 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:05:41.723 10:57:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.723 10:57:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 ************************************ 
00:05:41.723 START TEST accel_decomp_mcore 00:05:41.723 ************************************ 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:41.723 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:41.723 [2024-05-15 10:57:38.821556] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:41.723 [2024-05-15 10:57:38.821600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087349 ] 00:05:41.723 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.723 [2024-05-15 10:57:38.875495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.723 [2024-05-15 10:57:38.949713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.723 [2024-05-15 10:57:38.949811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.723 [2024-05-15 10:57:38.949885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.723 [2024-05-15 10:57:38.949888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:41.983 10:57:38 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.983 10:57:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.983 10:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:42.921 10:57:40 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.921 00:05:42.921 real 0m1.371s 00:05:42.921 user 0m4.600s 00:05:42.921 sys 0m0.117s 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:42.921 10:57:40 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:42.921 ************************************ 00:05:42.921 END TEST accel_decomp_mcore 00:05:42.921 ************************************ 00:05:43.181 10:57:40 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:43.181 10:57:40 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:43.181 10:57:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:43.181 10:57:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.181 ************************************ 00:05:43.181 START TEST accel_decomp_full_mcore 00:05:43.181 ************************************ 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:43.181 [2024-05-15 10:57:40.263985] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:43.181 [2024-05-15 10:57:40.264049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087600 ] 00:05:43.181 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.181 [2024-05-15 10:57:40.321008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.181 [2024-05-15 10:57:40.396954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.181 [2024-05-15 10:57:40.397048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.181 [2024-05-15 10:57:40.397114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.181 [2024-05-15 10:57:40.397115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.181 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.451 10:57:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.432 00:05:44.432 real 0m1.388s 00:05:44.432 user 0m4.643s 00:05:44.432 sys 0m0.123s 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:44.432 10:57:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:44.432 ************************************ 00:05:44.432 END TEST accel_decomp_full_mcore 00:05:44.432 ************************************ 00:05:44.432 10:57:41 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:44.432 10:57:41 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:05:44.432 10:57:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.432 10:57:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.432 
************************************ 00:05:44.432 START TEST accel_decomp_mthread 00:05:44.432 ************************************ 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:44.432 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:44.691 [2024-05-15 10:57:41.721585] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:44.691 [2024-05-15 10:57:41.721665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087860 ] 00:05:44.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.691 [2024-05-15 10:57:41.777462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.691 [2024-05-15 10:57:41.849757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.691 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- 
accel/accel.sh@22 -- # accel_module=software 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.692 10:57:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.071 00:05:46.071 real 0m1.366s 00:05:46.071 user 0m1.264s 00:05:46.071 sys 0m0.115s 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:46.071 10:57:43 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.071 ************************************ 00:05:46.071 END TEST accel_decomp_mthread 00:05:46.071 ************************************ 00:05:46.071 10:57:43 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:46.071 10:57:43 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:46.071 10:57:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:46.071 10:57:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.071 ************************************ 00:05:46.071 START TEST accel_decomp_full_mthread 00:05:46.071 ************************************ 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:46.071 [2024-05-15 10:57:43.154865] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:46.071 [2024-05-15 10:57:43.154925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088105 ] 00:05:46.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.071 [2024-05-15 10:57:43.210913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.071 [2024-05-15 10:57:43.282094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.071 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.072 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:05:46.331 10:57:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.268 00:05:47.268 real 0m1.390s 00:05:47.268 user 0m1.288s 00:05:47.268 sys 0m0.116s 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:47.268 10:57:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:47.268 ************************************ 00:05:47.268 END TEST accel_decomp_full_mthread 00:05:47.268 ************************************ 00:05:47.528 10:57:44 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:47.528 10:57:44 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:47.528 10:57:44 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:47.528 10:57:44 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:47.528 10:57:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:47.528 10:57:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.528 10:57:44 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.528 10:57:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.528 10:57:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.528 10:57:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.528 10:57:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.528 10:57:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:47.528 10:57:44 accel -- accel/accel.sh@41 -- # jq -r . 00:05:47.528 ************************************ 00:05:47.528 START TEST accel_dif_functional_tests 00:05:47.528 ************************************ 00:05:47.528 10:57:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:47.528 [2024-05-15 10:57:44.636072] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:47.528 [2024-05-15 10:57:44.636110] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088357 ] 00:05:47.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.528 [2024-05-15 10:57:44.688371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.528 [2024-05-15 10:57:44.761963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.528 [2024-05-15 10:57:44.761985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.528 [2024-05-15 10:57:44.761987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.787 00:05:47.787 00:05:47.787 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.787 http://cunit.sourceforge.net/ 00:05:47.787 00:05:47.787 00:05:47.787 Suite: accel_dif 00:05:47.787 Test: verify: DIF generated, GUARD check ...passed 00:05:47.787 Test: verify: DIF generated, APPTAG check ...passed 00:05:47.787 Test: verify: DIF 
generated, REFTAG check ...passed 00:05:47.787 Test: verify: DIF not generated, GUARD check ...[2024-05-15 10:57:44.829843] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:47.787 [2024-05-15 10:57:44.829883] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:47.787 passed 00:05:47.787 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 10:57:44.829911] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:47.787 [2024-05-15 10:57:44.829946] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:47.787 passed 00:05:47.787 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 10:57:44.829962] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:47.787 [2024-05-15 10:57:44.829981] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:47.787 passed 00:05:47.787 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:47.787 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 10:57:44.830026] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:47.787 passed 00:05:47.787 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:47.787 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:47.787 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:47.787 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 10:57:44.830133] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:47.787 passed 00:05:47.787 Test: generate copy: DIF generated, GUARD check ...passed 00:05:47.787 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:47.787 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:47.787 Test: generate 
copy: DIF generated, no GUARD check flag set ...passed 00:05:47.787 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:47.787 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:47.787 Test: generate copy: iovecs-len validate ...[2024-05-15 10:57:44.830302] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:47.787 passed 00:05:47.787 Test: generate copy: buffer alignment validate ...passed 00:05:47.787 00:05:47.787 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.787 suites 1 1 n/a 0 0 00:05:47.787 tests 20 20 20 0 0 00:05:47.787 asserts 204 204 204 0 n/a 00:05:47.787 00:05:47.787 Elapsed time = 0.002 seconds 00:05:47.787 00:05:47.787 real 0m0.430s 00:05:47.787 user 0m0.610s 00:05:47.787 sys 0m0.135s 00:05:47.787 10:57:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:47.787 10:57:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:47.787 ************************************ 00:05:47.787 END TEST accel_dif_functional_tests 00:05:47.787 ************************************ 00:05:48.046 00:05:48.047 real 0m31.499s 00:05:48.047 user 0m35.407s 00:05:48.047 sys 0m4.112s 00:05:48.047 10:57:45 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:48.047 10:57:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 ************************************ 00:05:48.047 END TEST accel 00:05:48.047 ************************************ 00:05:48.047 10:57:45 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:48.047 10:57:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:48.047 10:57:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:48.047 10:57:45 -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 ************************************ 
00:05:48.047 START TEST accel_rpc 00:05:48.047 ************************************ 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:48.047 * Looking for test storage... 00:05:48.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:48.047 10:57:45 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.047 10:57:45 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2088501 00:05:48.047 10:57:45 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2088501 00:05:48.047 10:57:45 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 2088501 ']' 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:48.047 10:57:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 [2024-05-15 10:57:45.248438] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:48.047 [2024-05-15 10:57:45.248486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088501 ] 00:05:48.047 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.047 [2024-05-15 10:57:45.302426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.306 [2024-05-15 10:57:45.383850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.875 10:57:46 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:48.875 10:57:46 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:48.875 10:57:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:48.875 10:57:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:48.875 10:57:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:48.875 10:57:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:48.875 10:57:46 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:48.875 10:57:46 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:48.875 10:57:46 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:48.875 10:57:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.875 ************************************ 00:05:48.875 START TEST accel_assign_opcode 00:05:48.875 ************************************ 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:48.875 
[2024-05-15 10:57:46.089924] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:48.875 [2024-05-15 10:57:46.097938] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.875 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.135 software 00:05:49.135 00:05:49.135 real 0m0.235s 00:05:49.135 user 0m0.047s 00:05:49.135 sys 0m0.010s 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:49.135 10:57:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.135 ************************************ 00:05:49.135 END TEST accel_assign_opcode 00:05:49.135 ************************************ 00:05:49.135 10:57:46 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2088501 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 2088501 ']' 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 2088501 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2088501 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2088501' 00:05:49.135 killing process with pid 2088501 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@966 -- # kill 2088501 00:05:49.135 10:57:46 accel_rpc -- common/autotest_common.sh@971 -- # wait 2088501 00:05:49.702 00:05:49.702 real 0m1.624s 00:05:49.702 user 0m1.705s 00:05:49.702 sys 0m0.420s 00:05:49.702 10:57:46 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:49.702 10:57:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 ************************************ 00:05:49.702 END TEST accel_rpc 00:05:49.702 ************************************ 00:05:49.702 10:57:46 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.702 10:57:46 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:49.702 10:57:46 -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:05:49.702 10:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 ************************************ 00:05:49.702 START TEST app_cmdline 00:05:49.702 ************************************ 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.702 * Looking for test storage... 00:05:49.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.702 10:57:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:49.702 10:57:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2088949 00:05:49.702 10:57:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2088949 00:05:49.702 10:57:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 2088949 ']' 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:49.702 10:57:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 [2024-05-15 10:57:46.938614] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:49.702 [2024-05-15 10:57:46.938661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088949 ] 00:05:49.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.960 [2024-05-15 10:57:46.992160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.960 [2024-05-15 10:57:47.065172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.529 10:57:47 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:50.529 10:57:47 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:05:50.529 10:57:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:50.788 { 00:05:50.788 "version": "SPDK v24.05-pre git sha1 01f10b8a3", 00:05:50.788 "fields": { 00:05:50.788 "major": 24, 00:05:50.788 "minor": 5, 00:05:50.788 "patch": 0, 00:05:50.788 "suffix": "-pre", 00:05:50.788 "commit": "01f10b8a3" 00:05:50.788 } 00:05:50.788 } 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:50.788 10:57:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:50.788 10:57:47 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.048 request: 00:05:51.048 { 00:05:51.048 "method": "env_dpdk_get_mem_stats", 00:05:51.048 "req_id": 1 00:05:51.048 } 00:05:51.048 Got JSON-RPC error 
response 00:05:51.048 response: 00:05:51.048 { 00:05:51.048 "code": -32601, 00:05:51.048 "message": "Method not found" 00:05:51.048 } 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:51.048 10:57:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2088949 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 2088949 ']' 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 2088949 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2088949 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2088949' 00:05:51.048 killing process with pid 2088949 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@966 -- # kill 2088949 00:05:51.048 10:57:48 app_cmdline -- common/autotest_common.sh@971 -- # wait 2088949 00:05:51.307 00:05:51.307 real 0m1.717s 00:05:51.307 user 0m2.055s 00:05:51.307 sys 0m0.420s 00:05:51.307 10:57:48 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.307 10:57:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.307 ************************************ 00:05:51.307 END TEST app_cmdline 00:05:51.307 ************************************ 00:05:51.307 10:57:48 -- spdk/autotest.sh@182 -- # run_test version 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:51.307 10:57:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:51.307 10:57:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.307 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.567 ************************************ 00:05:51.567 START TEST version 00:05:51.567 ************************************ 00:05:51.567 10:57:48 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:51.567 * Looking for test storage... 00:05:51.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:51.567 10:57:48 version -- app/version.sh@17 -- # get_header_version major 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # cut -f2 00:05:51.567 10:57:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.567 10:57:48 version -- app/version.sh@17 -- # major=24 00:05:51.567 10:57:48 version -- app/version.sh@18 -- # get_header_version minor 00:05:51.567 10:57:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # cut -f2 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.567 10:57:48 version -- app/version.sh@18 -- # minor=5 00:05:51.567 10:57:48 version -- app/version.sh@19 -- # get_header_version patch 00:05:51.567 10:57:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # cut -f2 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # tr -d '"' 
00:05:51.567 10:57:48 version -- app/version.sh@19 -- # patch=0 00:05:51.567 10:57:48 version -- app/version.sh@20 -- # get_header_version suffix 00:05:51.567 10:57:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # cut -f2 00:05:51.567 10:57:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.567 10:57:48 version -- app/version.sh@20 -- # suffix=-pre 00:05:51.567 10:57:48 version -- app/version.sh@22 -- # version=24.5 00:05:51.567 10:57:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:51.567 10:57:48 version -- app/version.sh@28 -- # version=24.5rc0 00:05:51.567 10:57:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:51.567 10:57:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:51.567 10:57:48 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:51.567 10:57:48 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:51.567 00:05:51.567 real 0m0.150s 00:05:51.567 user 0m0.086s 00:05:51.567 sys 0m0.098s 00:05:51.567 10:57:48 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.567 10:57:48 version -- common/autotest_common.sh@10 -- # set +x 00:05:51.567 ************************************ 00:05:51.567 END TEST version 00:05:51.567 ************************************ 00:05:51.567 10:57:48 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@194 -- # uname -s 00:05:51.567 10:57:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:51.567 10:57:48 -- 
spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.567 10:57:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.567 10:57:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:51.567 10:57:48 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:51.567 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.567 10:57:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:05:51.567 10:57:48 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:05:51.567 10:57:48 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.567 10:57:48 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:51.567 10:57:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.567 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.827 ************************************ 00:05:51.827 START TEST nvmf_tcp 00:05:51.827 ************************************ 00:05:51.827 10:57:48 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.827 * Looking for test storage... 00:05:51.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.827 10:57:48 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.827 10:57:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.827 10:57:48 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.827 10:57:48 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.827 10:57:48 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.827 10:57:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.827 10:57:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.827 10:57:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:51.828 10:57:48 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.828 10:57:48 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:51.828 10:57:48 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:51.828 10:57:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:51.828 10:57:48 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:51.828 10:57:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:51.828 10:57:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.828 10:57:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.828 ************************************ 00:05:51.828 START TEST nvmf_example 00:05:51.828 ************************************ 00:05:51.828 10:57:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:51.828 * Looking for test storage... 
00:05:51.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.828 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.828 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.088 10:57:49 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:52.088 10:57:49 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:52.088 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:52.089 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.089 10:57:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:52.089 10:57:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.089 
10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:52.089 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:52.089 10:57:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:52.089 10:57:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:57.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:57.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:57.366 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:57.367 Found net devices under 0000:86:00.0: cvl_0_0 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:57.367 Found net devices under 0000:86:00.1: cvl_0_1 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:57.367 10:57:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:57.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:57.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:05:57.367 00:05:57.367 --- 10.0.0.2 ping statistics --- 00:05:57.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:57.367 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:57.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:57.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:05:57.367 00:05:57.367 --- 10.0.0.1 ping statistics --- 00:05:57.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:57.367 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:57.367 10:57:54 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2092347 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2092347 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 2092347 ']' 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:57.367 10:57:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:58.194 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.453 10:57:55 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:58.453 10:57:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:58.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.663 Initializing NVMe Controllers 00:06:10.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:10.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:10.663 Initialization complete. Launching workers. 
00:06:10.663 ======================================================== 00:06:10.663 Latency(us) 00:06:10.663 Device Information : IOPS MiB/s Average min max 00:06:10.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18327.52 71.59 3494.17 686.39 15489.37 00:06:10.663 ======================================================== 00:06:10.663 Total : 18327.52 71.59 3494.17 686.39 15489.37 00:06:10.663 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:10.663 rmmod nvme_tcp 00:06:10.663 rmmod nvme_fabrics 00:06:10.663 rmmod nvme_keyring 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2092347 ']' 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2092347 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 2092347 ']' 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 2092347 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2092347 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2092347' 00:06:10.663 killing process with pid 2092347 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 2092347 00:06:10.663 10:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 2092347 00:06:10.663 nvmf threads initialize successfully 00:06:10.663 bdev subsystem init successfully 00:06:10.663 created a nvmf target service 00:06:10.663 create targets's poll groups done 00:06:10.663 all subsystems of target started 00:06:10.663 nvmf target is running 00:06:10.663 all subsystems of target stopped 00:06:10.663 destroy targets's poll groups done 00:06:10.663 destroyed the nvmf target service 00:06:10.663 bdev subsystem finish successfully 00:06:10.663 nvmf threads destroy successfully 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:10.663 10:58:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.922 10:58:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:10.922 10:58:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:10.922 10:58:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:10.922 10:58:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.183 00:06:11.183 real 0m19.198s 00:06:11.183 user 0m46.006s 00:06:11.183 sys 0m5.524s 00:06:11.183 10:58:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:11.183 10:58:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.183 ************************************ 00:06:11.183 END TEST nvmf_example 00:06:11.183 ************************************ 00:06:11.183 10:58:08 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:11.183 10:58:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:11.183 10:58:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:11.183 10:58:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.183 ************************************ 00:06:11.183 START TEST nvmf_filesystem 00:06:11.183 ************************************ 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:11.183 * Looking for test storage... 
00:06:11.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:11.183 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:11.183 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:11.184 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:11.185 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:11.185 #define SPDK_CONFIG_H 00:06:11.185 
#define SPDK_CONFIG_APPS 1 00:06:11.185 #define SPDK_CONFIG_ARCH native 00:06:11.185 #undef SPDK_CONFIG_ASAN 00:06:11.185 #undef SPDK_CONFIG_AVAHI 00:06:11.185 #undef SPDK_CONFIG_CET 00:06:11.185 #define SPDK_CONFIG_COVERAGE 1 00:06:11.185 #define SPDK_CONFIG_CROSS_PREFIX 00:06:11.185 #undef SPDK_CONFIG_CRYPTO 00:06:11.185 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:11.185 #undef SPDK_CONFIG_CUSTOMOCF 00:06:11.185 #undef SPDK_CONFIG_DAOS 00:06:11.185 #define SPDK_CONFIG_DAOS_DIR 00:06:11.185 #define SPDK_CONFIG_DEBUG 1 00:06:11.185 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:11.185 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:11.185 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:11.185 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:11.185 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:11.185 #undef SPDK_CONFIG_DPDK_UADK 00:06:11.185 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:11.185 #define SPDK_CONFIG_EXAMPLES 1 00:06:11.185 #undef SPDK_CONFIG_FC 00:06:11.185 #define SPDK_CONFIG_FC_PATH 00:06:11.185 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:11.185 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:11.185 #undef SPDK_CONFIG_FUSE 00:06:11.185 #undef SPDK_CONFIG_FUZZER 00:06:11.185 #define SPDK_CONFIG_FUZZER_LIB 00:06:11.185 #undef SPDK_CONFIG_GOLANG 00:06:11.185 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:11.185 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:11.185 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:11.185 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:11.185 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:11.185 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:11.185 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:11.185 #define SPDK_CONFIG_IDXD 1 00:06:11.185 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:11.185 #undef SPDK_CONFIG_IPSEC_MB 00:06:11.185 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:11.185 #define SPDK_CONFIG_ISAL 1 00:06:11.185 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:11.185 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:06:11.185 #define SPDK_CONFIG_LIBDIR 00:06:11.185 #undef SPDK_CONFIG_LTO 00:06:11.185 #define SPDK_CONFIG_MAX_LCORES 00:06:11.185 #define SPDK_CONFIG_NVME_CUSE 1 00:06:11.185 #undef SPDK_CONFIG_OCF 00:06:11.185 #define SPDK_CONFIG_OCF_PATH 00:06:11.185 #define SPDK_CONFIG_OPENSSL_PATH 00:06:11.185 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:11.185 #define SPDK_CONFIG_PGO_DIR 00:06:11.185 #undef SPDK_CONFIG_PGO_USE 00:06:11.185 #define SPDK_CONFIG_PREFIX /usr/local 00:06:11.185 #undef SPDK_CONFIG_RAID5F 00:06:11.185 #undef SPDK_CONFIG_RBD 00:06:11.185 #define SPDK_CONFIG_RDMA 1 00:06:11.185 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:11.185 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:11.185 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:11.185 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:11.185 #define SPDK_CONFIG_SHARED 1 00:06:11.185 #undef SPDK_CONFIG_SMA 00:06:11.185 #define SPDK_CONFIG_TESTS 1 00:06:11.185 #undef SPDK_CONFIG_TSAN 00:06:11.185 #define SPDK_CONFIG_UBLK 1 00:06:11.185 #define SPDK_CONFIG_UBSAN 1 00:06:11.185 #undef SPDK_CONFIG_UNIT_TESTS 00:06:11.185 #undef SPDK_CONFIG_URING 00:06:11.185 #define SPDK_CONFIG_URING_PATH 00:06:11.185 #undef SPDK_CONFIG_URING_ZNS 00:06:11.185 #undef SPDK_CONFIG_USDT 00:06:11.185 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:11.185 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:11.185 #define SPDK_CONFIG_VFIO_USER 1 00:06:11.185 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:11.186 #define SPDK_CONFIG_VHOST 1 00:06:11.186 #define SPDK_CONFIG_VIRTIO 1 00:06:11.186 #undef SPDK_CONFIG_VTUNE 00:06:11.186 #define SPDK_CONFIG_VTUNE_DIR 00:06:11.186 #define SPDK_CONFIG_WERROR 1 00:06:11.186 #define SPDK_CONFIG_WPDK_DIR 00:06:11.186 #undef SPDK_CONFIG_XNVME 00:06:11.186 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:11.186 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:11.187 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:11.187 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:11.449 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:11.449 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:11.449 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:11.449 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:11.450 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:11.450 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:11.450 10:58:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2094769 ]] 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2094769 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.bX3yFg 00:06:11.450 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bX3yFg/tests/target /tmp/spdk.bX3yFg
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=972767232
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4311662592
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=190098341888
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974311936
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5875970048
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983778816
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987153920
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185489920
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986908160
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=249856
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n'
00:06:11.451 * Looking for test storage...
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}"
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}'
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=190098341888
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size ))
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size ))
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8090562560
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 ))
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:11.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:11.451 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable
00:06:11.452 10:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=()
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:06:16.726 Found 0000:86:00.0 (0x8086 - 0x159b)
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:06:16.726 Found 0000:86:00.1 (0x8086 - 0x159b)
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:06:16.726 Found net devices under 0000:86:00.0: cvl_0_0
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:06:16.726 Found net devices under 0000:86:00.1: cvl_0_1
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:16.726 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:16.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:16.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms
00:06:16.727
00:06:16.727 --- 10.0.0.2 ping statistics ---
00:06:16.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:16.727 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:16.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:16.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:06:16.727
00:06:16.727 --- 10.0.0.1 ping statistics ---
00:06:16.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:16.727 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']'
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:16.727 ************************************
00:06:16.727 START TEST nvmf_filesystem_no_in_capsule
00:06:16.727 ************************************
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2097787
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2097787
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 2097787 ']'
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable
00:06:16.727 10:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:16.727 [2024-05-15 10:58:13.935914] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
[2024-05-15 10:58:13.935955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:16.727 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.986 [2024-05-15 10:58:13.993125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:16.986 [2024-05-15 10:58:14.075595] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:16.986 [2024-05-15 10:58:14.075632] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:16.986 [2024-05-15 10:58:14.075639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:16.986 [2024-05-15 10:58:14.075646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:16.986 [2024-05-15 10:58:14.075653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:16.986 [2024-05-15 10:58:14.075711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.986 [2024-05-15 10:58:14.075729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.986 [2024-05-15 10:58:14.075815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.986 [2024-05-15 10:58:14.075816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.594 [2024-05-15 10:58:14.795935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.594 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.862 Malloc1 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.862 10:58:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.862 [2024-05-15 10:58:14.945992] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:17.862 [2024-05-15 10:58:14.946230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1379 -- # bdev_info='[ 00:06:17.862 { 00:06:17.862 "name": "Malloc1", 00:06:17.862 "aliases": [ 00:06:17.862 "f0eaca79-ec62-4aa2-b4ad-cdaae81884ca" 00:06:17.862 ], 00:06:17.862 "product_name": "Malloc disk", 00:06:17.862 "block_size": 512, 00:06:17.862 "num_blocks": 1048576, 00:06:17.862 "uuid": "f0eaca79-ec62-4aa2-b4ad-cdaae81884ca", 00:06:17.862 "assigned_rate_limits": { 00:06:17.862 "rw_ios_per_sec": 0, 00:06:17.862 "rw_mbytes_per_sec": 0, 00:06:17.862 "r_mbytes_per_sec": 0, 00:06:17.862 "w_mbytes_per_sec": 0 00:06:17.862 }, 00:06:17.862 "claimed": true, 00:06:17.862 "claim_type": "exclusive_write", 00:06:17.862 "zoned": false, 00:06:17.862 "supported_io_types": { 00:06:17.862 "read": true, 00:06:17.862 "write": true, 00:06:17.862 "unmap": true, 00:06:17.862 "write_zeroes": true, 00:06:17.862 "flush": true, 00:06:17.862 "reset": true, 00:06:17.862 "compare": false, 00:06:17.862 "compare_and_write": false, 00:06:17.862 "abort": true, 00:06:17.862 "nvme_admin": false, 00:06:17.862 "nvme_io": false 00:06:17.862 }, 00:06:17.862 "memory_domains": [ 00:06:17.862 { 00:06:17.862 "dma_device_id": "system", 00:06:17.862 "dma_device_type": 1 00:06:17.862 }, 00:06:17.862 { 00:06:17.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.862 "dma_device_type": 2 00:06:17.862 } 00:06:17.862 ], 00:06:17.862 "driver_specific": {} 00:06:17.862 } 00:06:17.862 ]' 00:06:17.862 10:58:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1384 -- # bdev_size=512 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:17.862 10:58:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:19.241 10:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:19.241 10:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:06:19.241 10:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:06:19.241 10:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:06:19.241 10:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 
00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:21.146 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:21.405 10:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:21.973 10:58:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:22.910 10:58:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.910 ************************************ 00:06:22.910 START TEST filesystem_ext4 00:06:22.910 ************************************ 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:22.910 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:06:22.911 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:06:22.911 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:06:22.911 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:06:22.911 10:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:22.911 mke2fs 1.46.5 (30-Dec-2021) 00:06:23.170 Discarding device blocks: 0/522240 done 00:06:23.170 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:23.170 Filesystem UUID: d137b37a-44f9-4864-8694-01c590d19147 00:06:23.170 Superblock backups stored on blocks: 00:06:23.170 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:23.170 00:06:23.170 Allocating group tables: 0/64 done 00:06:23.170 Writing inode tables: 0/64 done 00:06:23.429 Creating journal (8192 blocks): done 00:06:23.996 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:06:23.996 00:06:23.996 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:06:23.996 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:24.933 10:58:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2097787 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.933 10:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.933 00:06:24.933 real 0m1.891s 00:06:24.933 user 0m0.022s 00:06:24.933 sys 0m0.069s 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:24.933 ************************************ 00:06:24.933 END TEST filesystem_ext4 00:06:24.933 ************************************ 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.933 ************************************ 00:06:24.933 START TEST filesystem_btrfs 
00:06:24.933 ************************************ 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:06:24.933 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:25.192 btrfs-progs v6.6.2 00:06:25.192 See https://btrfs.readthedocs.io for more information. 00:06:25.192 00:06:25.192 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:25.192 NOTE: several default settings have changed in version 5.15, please make sure 00:06:25.192 this does not affect your deployments: 00:06:25.192 - DUP for metadata (-m dup) 00:06:25.192 - enabled no-holes (-O no-holes) 00:06:25.192 - enabled free-space-tree (-R free-space-tree) 00:06:25.192 00:06:25.192 Label: (null) 00:06:25.192 UUID: 53675e26-36ff-47ec-905b-cd7619428e12 00:06:25.192 Node size: 16384 00:06:25.192 Sector size: 4096 00:06:25.192 Filesystem size: 510.00MiB 00:06:25.192 Block group profiles: 00:06:25.192 Data: single 8.00MiB 00:06:25.192 Metadata: DUP 32.00MiB 00:06:25.192 System: DUP 8.00MiB 00:06:25.192 SSD detected: yes 00:06:25.192 Zoned device: no 00:06:25.192 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:25.192 Runtime features: free-space-tree 00:06:25.192 Checksum: crc32c 00:06:25.192 Number of devices: 1 00:06:25.192 Devices: 00:06:25.192 ID SIZE PATH 00:06:25.192 1 510.00MiB /dev/nvme0n1p1 00:06:25.192 00:06:25.192 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:06:25.192 10:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:26.128 10:58:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2097787 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:26.128 00:06:26.128 real 0m1.090s 00:06:26.128 user 0m0.025s 00:06:26.128 sys 0m0.125s 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:26.128 ************************************ 00:06:26.128 END TEST filesystem_btrfs 00:06:26.128 ************************************ 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.128 ************************************ 00:06:26.128 START TEST 
filesystem_xfs 00:06:26.128 ************************************ 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:06:26.128 10:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:26.128 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:26.128 = sectsz=512 attr=2, projid32bit=1 00:06:26.128 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:26.128 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:26.128 data = bsize=4096 blocks=130560, imaxpct=25 
00:06:26.128 = sunit=0 swidth=0 blks 00:06:26.128 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:26.128 log =internal log bsize=4096 blocks=16384, version=2 00:06:26.128 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:26.128 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:27.065 Discarding blocks...Done. 00:06:27.065 10:58:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:06:27.065 10:58:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2097787 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:29.600 00:06:29.600 real 0m3.397s 00:06:29.600 user 0m0.030s 00:06:29.600 sys 0m0.067s 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:29.600 ************************************ 00:06:29.600 END TEST filesystem_xfs 00:06:29.600 ************************************ 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:29.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2097787 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2097787 ']' 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2097787 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:29.600 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2097787 00:06:29.859 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:29.859 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:29.859 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 2097787' 00:06:29.859 killing process with pid 2097787 00:06:29.859 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 2097787 00:06:29.859 [2024-05-15 10:58:26.903225] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:29.859 10:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 2097787 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:30.119 00:06:30.119 real 0m13.383s 00:06:30.119 user 0m52.562s 00:06:30.119 sys 0m1.226s 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.119 ************************************ 00:06:30.119 END TEST nvmf_filesystem_no_in_capsule 00:06:30.119 ************************************ 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.119 ************************************ 00:06:30.119 START TEST nvmf_filesystem_in_capsule 00:06:30.119 ************************************ 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:06:30.119 10:58:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2100319 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2100319 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 2100319 ']' 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:30.119 10:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 [2024-05-15 10:58:27.386918] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:30.378 [2024-05-15 10:58:27.386963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.378 [2024-05-15 10:58:27.436336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.378 [2024-05-15 10:58:27.509593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.378 [2024-05-15 10:58:27.509632] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.378 [2024-05-15 10:58:27.509639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.378 [2024-05-15 10:58:27.509645] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.378 [2024-05-15 10:58:27.509650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:30.378 [2024-05-15 10:58:27.509701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.378 [2024-05-15 10:58:27.509800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.378 [2024-05-15 10:58:27.509883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.378 [2024-05-15 10:58:27.509884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.946 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:30.946 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 [2024-05-15 10:58:28.251310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
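The xtrace entries around this point (nvmf_create_transport with `-c 4096` for 4096-byte in-capsule data, then the Malloc1 bdev, subsystem, namespace, and TCP listener) reduce to a short RPC sequence. A hedged sketch follows: `rpc` here is a hypothetical stand-in that only echoes each call; a real run would issue these through SPDK's `rpc_cmd`/`scripts/rpc.py` against a live nvmf_tgt.

```shell
#!/bin/sh
# Sketch of the target-side setup visible in this test's xtrace.
# NOTE: 'rpc' is a hypothetical stand-in that only echoes the call;
# a real SPDK target would be driven via scripts/rpc.py instead.
NQN="nqn.2016-06.io.spdk:cnode1"
SERIAL="SPDKISFASTANDAWESOME"
rpc() { echo "rpc_cmd $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096  # 4096-byte in-capsule data size
rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB malloc bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s "$SERIAL"     # allow any host, set serial
rpc nvmf_subsystem_add_ns "$NQN" Malloc1             # expose the bdev as namespace 1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The initiator side then connects with `nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420`, as the log shows a few entries later.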
00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.206 10:58:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 [2024-05-15 10:58:28.399532] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:31.206 [2024-05-15 10:58:28.399774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:06:31.206 { 00:06:31.206 "name": "Malloc1", 00:06:31.206 "aliases": [ 00:06:31.206 "6b2954ed-ff6a-4fe1-9f1e-f5c7d30e5361" 
00:06:31.206 ],
00:06:31.206 "product_name": "Malloc disk",
00:06:31.206 "block_size": 512,
00:06:31.206 "num_blocks": 1048576,
00:06:31.206 "uuid": "6b2954ed-ff6a-4fe1-9f1e-f5c7d30e5361",
00:06:31.206 "assigned_rate_limits": {
00:06:31.206 "rw_ios_per_sec": 0,
00:06:31.206 "rw_mbytes_per_sec": 0,
00:06:31.206 "r_mbytes_per_sec": 0,
00:06:31.206 "w_mbytes_per_sec": 0
00:06:31.206 },
00:06:31.206 "claimed": true,
00:06:31.206 "claim_type": "exclusive_write",
00:06:31.206 "zoned": false,
00:06:31.206 "supported_io_types": {
00:06:31.206 "read": true,
00:06:31.206 "write": true,
00:06:31.206 "unmap": true,
00:06:31.206 "write_zeroes": true,
00:06:31.206 "flush": true,
00:06:31.206 "reset": true,
00:06:31.206 "compare": false,
00:06:31.206 "compare_and_write": false,
00:06:31.206 "abort": true,
00:06:31.206 "nvme_admin": false,
00:06:31.206 "nvme_io": false
00:06:31.206 },
00:06:31.206 "memory_domains": [
00:06:31.206 {
00:06:31.206 "dma_device_id": "system",
00:06:31.206 "dma_device_type": 1
00:06:31.206 },
00:06:31.206 {
00:06:31.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:31.206 "dma_device_type": 2
00:06:31.206 }
00:06:31.206 ],
00:06:31.206 "driver_specific": {}
00:06:31.206 }
00:06:31.206 ]'
00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size'
00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512
00:06:31.206 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks'
00:06:31.465 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576
00:06:31.465 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512
00:06:31.465 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512
00:06:31.465
10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:31.465 10:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:32.842 10:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:32.842 10:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:06:32.842 10:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:06:32.842 10:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:06:32.842 10:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- 
# lsblk -l -o NAME,SERIAL 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:34.754 10:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:35.012 10:58:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:35.578 10:58:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # 
'[' 4 -le 1 ']' 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.511 ************************************ 00:06:36.511 START TEST filesystem_in_capsule_ext4 00:06:36.511 ************************************ 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:06:36.511 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:06:36.511 
10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:36.511 mke2fs 1.46.5 (30-Dec-2021)
00:06:36.768 Discarding device blocks: 0/522240 done
00:06:36.768 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:36.768 Filesystem UUID: 057ad1ec-d691-4804-b698-ca280a957ee9
00:06:36.768 Superblock backups stored on blocks:
00:06:36.768 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:36.768
00:06:36.768 Allocating group tables: 0/64 done
00:06:36.768 Writing inode tables: 0/64 done
00:06:36.768 Creating journal (8192 blocks): done
00:06:36.768 Writing superblocks and filesystem accounting information: 0/64 done
00:06:36.768
00:06:36.768 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0
00:06:36.768 10:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:37.703 10:58:34
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2100319 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.703 00:06:37.703 real 0m1.137s 00:06:37.703 user 0m0.017s 00:06:37.703 sys 0m0.069s 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:37.703 ************************************ 00:06:37.703 END TEST filesystem_in_capsule_ext4 00:06:37.703 ************************************ 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.703 ************************************ 00:06:37.703 START TEST filesystem_in_capsule_btrfs 00:06:37.703 ************************************ 00:06:37.703 
10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:06:37.703 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:06:37.963 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:06:37.963 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:06:37.963 10:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:38.221 btrfs-progs v6.6.2 00:06:38.221 See https://btrfs.readthedocs.io for more information. 00:06:38.221 00:06:38.221 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:38.221 NOTE: several default settings have changed in version 5.15, please make sure
00:06:38.221 this does not affect your deployments:
00:06:38.221 - DUP for metadata (-m dup)
00:06:38.221 - enabled no-holes (-O no-holes)
00:06:38.221 - enabled free-space-tree (-R free-space-tree)
00:06:38.221
00:06:38.221 Label: (null)
00:06:38.221 UUID: f41d51cc-060f-43db-a274-eb111c037442
00:06:38.221 Node size: 16384
00:06:38.221 Sector size: 4096
00:06:38.221 Filesystem size: 510.00MiB
00:06:38.221 Block group profiles:
00:06:38.221 Data: single 8.00MiB
00:06:38.221 Metadata: DUP 32.00MiB
00:06:38.221 System: DUP 8.00MiB
00:06:38.221 SSD detected: yes
00:06:38.221 Zoned device: no
00:06:38.221 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:06:38.221 Runtime features: free-space-tree
00:06:38.221 Checksum: crc32c
00:06:38.221 Number of devices: 1
00:06:38.221 Devices:
00:06:38.221 ID SIZE PATH
00:06:38.221 1 510.00MiB /dev/nvme0n1p1
00:06:38.221
00:06:38.221 10:58:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0
00:06:38.221 10:58:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs --
target/filesystem.sh@29 -- # i=0 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2100319 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:39.156 00:06:39.156 real 0m1.352s 00:06:39.156 user 0m0.020s 00:06:39.156 sys 0m0.130s 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:39.156 ************************************ 00:06:39.156 END TEST filesystem_in_capsule_btrfs 00:06:39.156 ************************************ 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:06:39.156 ************************************ 00:06:39.156 START TEST filesystem_in_capsule_xfs 00:06:39.156 ************************************ 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:06:39.156 10:58:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:39.414 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:06:39.414 = sectsz=512 attr=2, projid32bit=1
00:06:39.414 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:39.414 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:39.414 data = bsize=4096 blocks=130560, imaxpct=25
00:06:39.414 = sunit=0 swidth=0 blks
00:06:39.414 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:39.414 log =internal log bsize=4096 blocks=16384, version=2
00:06:39.414 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:39.414 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:40.363 Discarding blocks...Done.
00:06:40.363 10:58:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0
00:06:40.363 10:58:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2100319
00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l
-o NAME 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.933 00:06:42.933 real 0m3.494s 00:06:42.933 user 0m0.029s 00:06:42.933 sys 0m0.067s 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.933 ************************************ 00:06:42.933 END TEST filesystem_in_capsule_xfs 00:06:42.933 ************************************ 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:42.933 10:58:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:42.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2100319 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2100319 ']' 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2100319 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2100319 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2100319' 00:06:42.933 killing process with pid 2100319 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 2100319 00:06:42.933 [2024-05-15 10:58:40.179313] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:42.933 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 2100319 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:43.503 00:06:43.503 real 0m13.204s 00:06:43.503 user 0m51.890s 00:06:43.503 sys 0m1.242s 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.503 ************************************ 00:06:43.503 END TEST nvmf_filesystem_in_capsule 00:06:43.503 ************************************ 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.503 rmmod nvme_tcp 00:06:43.503 rmmod nvme_fabrics 00:06:43.503 rmmod nvme_keyring 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.503 10:58:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.045 10:58:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:46.045 00:06:46.045 real 0m34.434s 00:06:46.045 user 1m46.168s 00:06:46.045 sys 0m6.582s 00:06:46.045 10:58:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:46.045 10:58:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.045 ************************************ 00:06:46.045 END TEST nvmf_filesystem 00:06:46.045 ************************************ 00:06:46.045 10:58:42 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:46.045 10:58:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:46.045 10:58:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:46.045 10:58:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.045 ************************************ 00:06:46.045 START TEST nvmf_target_discovery 00:06:46.045 ************************************ 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:46.045 * Looking for test storage... 00:06:46.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:46.045 10:58:42 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.045 10:58:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@298 -- # local -ga mlx 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:51.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:51.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:51.317 Found net devices under 0000:86:00.0: cvl_0_0 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:51.317 Found net devices under 0000:86:00.1: cvl_0_1 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:51.317 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.318 10:58:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.318 10:58:48 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:06:51.318 00:06:51.318 --- 10.0.0.2 ping statistics --- 00:06:51.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.318 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:06:51.318 00:06:51.318 --- 10.0.0.1 ping statistics --- 00:06:51.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.318 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2106133 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2106133 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0xF 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 2106133 ']' 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:51.318 10:58:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.318 [2024-05-15 10:58:48.249762] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:51.318 [2024-05-15 10:58:48.249803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.318 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.318 [2024-05-15 10:58:48.304789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.318 [2024-05-15 10:58:48.384724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.318 [2024-05-15 10:58:48.384758] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.318 [2024-05-15 10:58:48.384766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.318 [2024-05-15 10:58:48.384772] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:51.318 [2024-05-15 10:58:48.384777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.318 [2024-05-15 10:58:48.384819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.318 [2024-05-15 10:58:48.384917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.318 [2024-05-15 10:58:48.385000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.318 [2024-05-15 10:58:48.385001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 [2024-05-15 10:58:49.109213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.885 10:58:49 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 Null1 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.885 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.144 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 [2024-05-15 10:58:49.154558] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:52.144 [2024-05-15 
10:58:49.154749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.144 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.144 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.144 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 Null2 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 Null3 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 Null4 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.145 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:06:52.404 00:06:52.404 Discovery Log Number of Records 6, Generation counter 6 00:06:52.404 =====Discovery Log Entry 0====== 00:06:52.404 trtype: tcp 00:06:52.404 adrfam: ipv4 00:06:52.404 subtype: current discovery subsystem 00:06:52.404 treq: not required 00:06:52.404 portid: 0 00:06:52.404 trsvcid: 4420 00:06:52.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.404 traddr: 10.0.0.2 00:06:52.404 eflags: explicit discovery connections, duplicate discovery information 00:06:52.404 sectype: none 00:06:52.404 =====Discovery Log Entry 1====== 00:06:52.404 trtype: tcp 00:06:52.404 adrfam: ipv4 00:06:52.404 subtype: nvme subsystem 00:06:52.404 treq: not required 00:06:52.404 portid: 0 00:06:52.404 
trsvcid: 4420 00:06:52.404 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:52.404 traddr: 10.0.0.2 00:06:52.404 eflags: none 00:06:52.404 sectype: none 00:06:52.404 =====Discovery Log Entry 2====== 00:06:52.404 trtype: tcp 00:06:52.404 adrfam: ipv4 00:06:52.404 subtype: nvme subsystem 00:06:52.404 treq: not required 00:06:52.404 portid: 0 00:06:52.404 trsvcid: 4420 00:06:52.404 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:52.404 traddr: 10.0.0.2 00:06:52.404 eflags: none 00:06:52.404 sectype: none 00:06:52.404 =====Discovery Log Entry 3====== 00:06:52.404 trtype: tcp 00:06:52.404 adrfam: ipv4 00:06:52.404 subtype: nvme subsystem 00:06:52.404 treq: not required 00:06:52.404 portid: 0 00:06:52.404 trsvcid: 4420 00:06:52.404 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:52.404 traddr: 10.0.0.2 00:06:52.404 eflags: none 00:06:52.404 sectype: none 00:06:52.404 =====Discovery Log Entry 4====== 00:06:52.404 trtype: tcp 00:06:52.404 adrfam: ipv4 00:06:52.404 subtype: nvme subsystem 00:06:52.404 treq: not required 00:06:52.404 portid: 0 00:06:52.404 trsvcid: 4420 00:06:52.404 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:52.404 traddr: 10.0.0.2 00:06:52.404 eflags: none 00:06:52.404 sectype: none 00:06:52.404 =====Discovery Log Entry 5====== 00:06:52.404 trtype: tcp 00:06:52.404 adrfam: ipv4 00:06:52.404 subtype: discovery subsystem referral 00:06:52.404 treq: not required 00:06:52.404 portid: 0 00:06:52.404 trsvcid: 4430 00:06:52.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.404 traddr: 10.0.0.2 00:06:52.404 eflags: none 00:06:52.404 sectype: none 00:06:52.404 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:52.404 Perform nvmf subsystem discovery via RPC 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 [ 00:06:52.405 { 00:06:52.405 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:52.405 "subtype": "Discovery", 00:06:52.405 "listen_addresses": [ 00:06:52.405 { 00:06:52.405 "trtype": "TCP", 00:06:52.405 "adrfam": "IPv4", 00:06:52.405 "traddr": "10.0.0.2", 00:06:52.405 "trsvcid": "4420" 00:06:52.405 } 00:06:52.405 ], 00:06:52.405 "allow_any_host": true, 00:06:52.405 "hosts": [] 00:06:52.405 }, 00:06:52.405 { 00:06:52.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:52.405 "subtype": "NVMe", 00:06:52.405 "listen_addresses": [ 00:06:52.405 { 00:06:52.405 "trtype": "TCP", 00:06:52.405 "adrfam": "IPv4", 00:06:52.405 "traddr": "10.0.0.2", 00:06:52.405 "trsvcid": "4420" 00:06:52.405 } 00:06:52.405 ], 00:06:52.405 "allow_any_host": true, 00:06:52.405 "hosts": [], 00:06:52.405 "serial_number": "SPDK00000000000001", 00:06:52.405 "model_number": "SPDK bdev Controller", 00:06:52.405 "max_namespaces": 32, 00:06:52.405 "min_cntlid": 1, 00:06:52.405 "max_cntlid": 65519, 00:06:52.405 "namespaces": [ 00:06:52.405 { 00:06:52.405 "nsid": 1, 00:06:52.405 "bdev_name": "Null1", 00:06:52.405 "name": "Null1", 00:06:52.405 "nguid": "8A241C7D41A240809A932F83A215FE41", 00:06:52.405 "uuid": "8a241c7d-41a2-4080-9a93-2f83a215fe41" 00:06:52.405 } 00:06:52.405 ] 00:06:52.405 }, 00:06:52.405 { 00:06:52.405 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:52.405 "subtype": "NVMe", 00:06:52.405 "listen_addresses": [ 00:06:52.405 { 00:06:52.405 "trtype": "TCP", 00:06:52.405 "adrfam": "IPv4", 00:06:52.405 "traddr": "10.0.0.2", 00:06:52.405 "trsvcid": "4420" 00:06:52.405 } 00:06:52.405 ], 00:06:52.405 "allow_any_host": true, 00:06:52.405 "hosts": [], 00:06:52.405 "serial_number": "SPDK00000000000002", 00:06:52.405 "model_number": "SPDK bdev Controller", 00:06:52.405 "max_namespaces": 32, 00:06:52.405 "min_cntlid": 1, 00:06:52.405 "max_cntlid": 65519, 00:06:52.405 "namespaces": [ 00:06:52.405 { 00:06:52.405 "nsid": 1, 
00:06:52.405 "bdev_name": "Null2", 00:06:52.405 "name": "Null2", 00:06:52.405 "nguid": "2C6B8DE51C4F4696BC6D7D55628D253B", 00:06:52.405 "uuid": "2c6b8de5-1c4f-4696-bc6d-7d55628d253b" 00:06:52.405 } 00:06:52.405 ] 00:06:52.405 }, 00:06:52.405 { 00:06:52.405 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:52.405 "subtype": "NVMe", 00:06:52.405 "listen_addresses": [ 00:06:52.405 { 00:06:52.405 "trtype": "TCP", 00:06:52.405 "adrfam": "IPv4", 00:06:52.405 "traddr": "10.0.0.2", 00:06:52.405 "trsvcid": "4420" 00:06:52.405 } 00:06:52.405 ], 00:06:52.405 "allow_any_host": true, 00:06:52.405 "hosts": [], 00:06:52.405 "serial_number": "SPDK00000000000003", 00:06:52.405 "model_number": "SPDK bdev Controller", 00:06:52.405 "max_namespaces": 32, 00:06:52.405 "min_cntlid": 1, 00:06:52.405 "max_cntlid": 65519, 00:06:52.405 "namespaces": [ 00:06:52.405 { 00:06:52.405 "nsid": 1, 00:06:52.405 "bdev_name": "Null3", 00:06:52.405 "name": "Null3", 00:06:52.405 "nguid": "ED84304851FC4212AA85B91C21FDB0E8", 00:06:52.405 "uuid": "ed843048-51fc-4212-aa85-b91c21fdb0e8" 00:06:52.405 } 00:06:52.405 ] 00:06:52.405 }, 00:06:52.405 { 00:06:52.405 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:52.405 "subtype": "NVMe", 00:06:52.405 "listen_addresses": [ 00:06:52.405 { 00:06:52.405 "trtype": "TCP", 00:06:52.405 "adrfam": "IPv4", 00:06:52.405 "traddr": "10.0.0.2", 00:06:52.405 "trsvcid": "4420" 00:06:52.405 } 00:06:52.405 ], 00:06:52.405 "allow_any_host": true, 00:06:52.405 "hosts": [], 00:06:52.405 "serial_number": "SPDK00000000000004", 00:06:52.405 "model_number": "SPDK bdev Controller", 00:06:52.405 "max_namespaces": 32, 00:06:52.405 "min_cntlid": 1, 00:06:52.405 "max_cntlid": 65519, 00:06:52.405 "namespaces": [ 00:06:52.405 { 00:06:52.405 "nsid": 1, 00:06:52.405 "bdev_name": "Null4", 00:06:52.405 "name": "Null4", 00:06:52.405 "nguid": "E01BC92827304DBDB25E55301E3BD973", 00:06:52.405 "uuid": "e01bc928-2730-4dbd-b25e-55301e3bd973" 00:06:52.405 } 00:06:52.405 ] 00:06:52.405 } 00:06:52.405 ] 00:06:52.405 
10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 
00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.405 10:58:49 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.405 rmmod nvme_tcp 00:06:52.405 rmmod nvme_fabrics 00:06:52.405 rmmod nvme_keyring 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2106133 ']' 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2106133 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 2106133 ']' 00:06:52.664 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 2106133 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2106133 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2106133' 00:06:52.665 killing process with pid 2106133 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 2106133 00:06:52.665 [2024-05-15 10:58:49.736537] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:52.665 10:58:49 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@971 -- # wait 2106133 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.923 10:58:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.838 10:58:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.838 00:06:54.838 real 0m9.215s 00:06:54.838 user 0m7.856s 00:06:54.838 sys 0m4.287s 00:06:54.838 10:58:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:54.838 10:58:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 ************************************ 00:06:54.838 END TEST nvmf_target_discovery 00:06:54.838 ************************************ 00:06:54.838 10:58:52 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.838 10:58:52 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:54.838 10:58:52 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:54.838 10:58:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 ************************************ 00:06:54.838 START TEST nvmf_referrals 00:06:54.838 
************************************ 00:06:54.838 10:58:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:55.097 * Looking for test storage... 00:06:55.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.097 10:58:52 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:06:55.097 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.098 10:58:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- 
# set +x 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:00.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:00.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:00.371 Found net devices under 0000:86:00.0: cvl_0_0 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.371 10:58:57 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:00.371 Found net devices under 0000:86:00.1: cvl_0_1 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:00.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:07:00.371 00:07:00.371 --- 10.0.0.2 ping statistics --- 00:07:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.371 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:07:00.371 00:07:00.371 --- 10.0.0.1 ping statistics --- 00:07:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.371 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.371 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.372 10:58:57 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2109902 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2109902 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 2109902 ']' 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.372 10:58:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.372 [2024-05-15 10:58:57.433333] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:07:00.372 [2024-05-15 10:58:57.433377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.372 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.372 [2024-05-15 10:58:57.490600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.372 [2024-05-15 10:58:57.571131] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.372 [2024-05-15 10:58:57.571168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:00.372 [2024-05-15 10:58:57.571176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.372 [2024-05-15 10:58:57.571182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.372 [2024-05-15 10:58:57.571187] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.372 [2024-05-15 10:58:57.571234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.372 [2024-05-15 10:58:57.571250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.372 [2024-05-15 10:58:57.571343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.372 [2024-05-15 10:58:57.571344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.310 [2024-05-15 10:58:58.278095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.310 10:58:58 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.310 [2024-05-15 10:58:58.291418] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:01.310 [2024-05-15 10:58:58.291618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.310 10:58:58 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.310 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.311 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:01.570 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.571 10:58:58 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.571 10:58:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.830 10:58:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:01.830 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:01.830 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:01.830 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:01.830 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:01.830 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.830 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:02.090 10:58:59 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.090 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem 
referral' 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.349 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.609 rmmod nvme_tcp 00:07:02.609 rmmod nvme_fabrics 00:07:02.609 rmmod nvme_keyring 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2109902 ']' 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2109902 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 2109902 ']' 00:07:02.609 10:58:59 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 2109902 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2109902 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2109902' 00:07:02.609 killing process with pid 2109902 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 2109902 00:07:02.609 [2024-05-15 10:58:59.835773] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:02.609 10:58:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 2109902 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.869 10:59:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:05.405 10:59:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.405 00:07:05.405 real 0m10.008s 00:07:05.405 user 0m11.981s 00:07:05.405 sys 0m4.513s 00:07:05.405 10:59:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:05.405 10:59:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.405 ************************************ 00:07:05.405 END TEST nvmf_referrals 00:07:05.405 ************************************ 00:07:05.405 10:59:02 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:05.405 10:59:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:05.405 10:59:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:05.405 10:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.405 ************************************ 00:07:05.405 START TEST nvmf_connect_disconnect 00:07:05.405 ************************************ 00:07:05.405 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:05.405 * Looking for test storage... 
00:07:05.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.405 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.405 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:05.405 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.405 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.406 10:59:02 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.406 10:59:02 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.406 10:59:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:10.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:10.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.673 
10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:10.673 Found net devices under 0000:86:00.0: cvl_0_0 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:10.673 Found net devices under 0000:86:00.1: cvl_0_1 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.673 10:59:07 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.673 
10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:10.673 00:07:10.673 --- 10.0.0.2 ping statistics --- 00:07:10.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.673 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:10.673 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:07:10.674 00:07:10.674 --- 10.0.0.1 ping statistics --- 00:07:10.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.674 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.674 10:59:07 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2113759 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2113759 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 2113759 ']' 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 10:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.674 [2024-05-15 10:59:07.537965] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:10.674 [2024-05-15 10:59:07.538006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.674 [2024-05-15 10:59:07.594361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.674 [2024-05-15 10:59:07.674435] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.674 [2024-05-15 10:59:07.674470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.674 [2024-05-15 10:59:07.674477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.674 [2024-05-15 10:59:07.674483] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.674 [2024-05-15 10:59:07.674489] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.674 [2024-05-15 10:59:07.674536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.674 [2024-05-15 10:59:07.674552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.674 [2024-05-15 10:59:07.674643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.674 [2024-05-15 10:59:07.674644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.271 [2024-05-15 10:59:08.397078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.271 [2024-05-15 10:59:08.452895] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:11.271 [2024-05-15 10:59:08.453124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.271 10:59:08 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:11.271 10:59:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:14.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.715 rmmod nvme_tcp 00:07:27.715 rmmod nvme_fabrics 00:07:27.715 rmmod nvme_keyring 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:27.715 10:59:24 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2113759 ']' 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2113759 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2113759 ']' 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 2113759 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2113759 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2113759' 00:07:27.715 killing process with pid 2113759 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 2113759 00:07:27.715 [2024-05-15 10:59:24.781590] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.715 10:59:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 2113759 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.974 10:59:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.880 10:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.880 00:07:29.880 real 0m24.918s 00:07:29.880 user 1m10.256s 00:07:29.880 sys 0m5.179s 00:07:29.880 10:59:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:29.880 10:59:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:29.880 ************************************ 00:07:29.880 END TEST nvmf_connect_disconnect 00:07:29.880 ************************************ 00:07:29.880 10:59:27 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:29.880 10:59:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:29.880 10:59:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:29.880 10:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.139 ************************************ 00:07:30.139 START TEST nvmf_multitarget 00:07:30.139 ************************************ 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:30.140 * Looking for test storage... 
00:07:30.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.140 10:59:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.415 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.416 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.416 10:59:31 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.416 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.416 10:59:31 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.416 10:59:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:07:35.416 00:07:35.416 --- 10.0.0.2 ping statistics --- 00:07:35.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.416 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:07:35.416 00:07:35.416 --- 10.0.0.1 ping statistics --- 00:07:35.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.416 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2120145 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2120145 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@828 -- # '[' -z 2120145 ']' 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:35.416 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.416 [2024-05-15 10:59:32.166650] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:07:35.416 [2024-05-15 10:59:32.166695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.416 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.416 [2024-05-15 10:59:32.223635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.416 [2024-05-15 10:59:32.305947] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.416 [2024-05-15 10:59:32.305982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.416 [2024-05-15 10:59:32.305989] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.416 [2024-05-15 10:59:32.305995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.416 [2024-05-15 10:59:32.306000] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.416 [2024-05-15 10:59:32.306035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.416 [2024-05-15 10:59:32.306135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.416 [2024-05-15 10:59:32.306210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.416 [2024-05-15 10:59:32.306212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.984 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:35.984 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:07:35.984 10:59:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.984 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:35.984 10:59:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.984 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.984 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:35.984 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:35.984 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:35.984 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:35.984 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:35.984 "nvmf_tgt_1" 00:07:35.985 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:36.243 "nvmf_tgt_2" 00:07:36.243 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:36.243 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:36.243 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:36.243 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:36.502 true 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:36.502 true 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:36.502 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.502 10:59:33 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.502 rmmod nvme_tcp 00:07:36.761 rmmod nvme_fabrics 00:07:36.761 rmmod nvme_keyring 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2120145 ']' 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2120145 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 2120145 ']' 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 2120145 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2120145 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2120145' 00:07:36.761 killing process with pid 2120145 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 2120145 00:07:36.761 10:59:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 2120145 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.020 10:59:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.920 10:59:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:38.920 00:07:38.920 real 0m8.975s 00:07:38.920 user 0m9.020s 00:07:38.920 sys 0m4.126s 00:07:38.920 10:59:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:38.920 10:59:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:38.920 ************************************ 00:07:38.920 END TEST nvmf_multitarget 00:07:38.920 ************************************ 00:07:38.920 10:59:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:38.920 10:59:36 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:38.920 10:59:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:38.920 10:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.178 ************************************ 00:07:39.178 START TEST nvmf_rpc 00:07:39.178 ************************************ 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:39.178 * Looking for test storage... 
00:07:39.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.178 10:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.461 10:59:41 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.461 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:44.462 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:44.462 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:44.462 Found net devices under 0000:86:00.0: cvl_0_0 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:44.462 Found net devices under 0000:86:00.1: cvl_0_1 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
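The discovery loop traced above buckets PCI NICs into e810/x722/mlx families by vendor:device ID before scanning `/sys/bus/pci/devices/$pci/net/` for their net devices. A minimal Python sketch of that classification step — the IDs are the ones visible in the trace (`$intel:0x1592`, `$intel:0x159b`, `$intel:0x37d2`, and the Mellanox list); the dict and function names are illustrative, not part of `nvmf/common.sh`:

```python
# Vendor:device -> NIC family, as built up by the e810+=/x722+=/mlx+= lines
# in the trace (0x8086 = Intel, 0x15b3 = Mellanox).
NIC_FAMILIES = {
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x159b"): "e810",
    ("0x8086", "0x37d2"): "x722",
    ("0x15b3", "0xa2dc"): "mlx",
    ("0x15b3", "0x1021"): "mlx",
    ("0x15b3", "0xa2d6"): "mlx",
    ("0x15b3", "0x101d"): "mlx",
    ("0x15b3", "0x1017"): "mlx",
    ("0x15b3", "0x1019"): "mlx",
    ("0x15b3", "0x1015"): "mlx",
    ("0x15b3", "0x1013"): "mlx",
}

def classify(vendor: str, device: str) -> str:
    """Return the NIC family for a vendor/device pair, or 'unknown'."""
    return NIC_FAMILIES.get((vendor, device), "unknown")

# Both ports found in this run (0000:86:00.0 and .1) report 0x8086:0x159b,
# which is why the log takes the "e810 == e810" branch.
print(classify("0x8086", "0x159b"))  # e810
```

In the run above this is why `pci_devs` collapses to the two e810 ports and the harness then finds `cvl_0_0` and `cvl_0_1` under them.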
00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.462 10:59:41 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:07:44.462 00:07:44.462 --- 10.0.0.2 ping statistics --- 00:07:44.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.462 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:44.462 00:07:44.462 --- 10.0.0.1 ping statistics --- 00:07:44.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.462 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:44.462 
10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2123925 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2123925 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 2123925 ']' 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:44.462 10:59:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.462 [2024-05-15 10:59:41.586007] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:44.463 [2024-05-15 10:59:41.586048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.463 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.463 [2024-05-15 10:59:41.642619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.463 [2024-05-15 10:59:41.716393] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.463 [2024-05-15 10:59:41.716429] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.463 [2024-05-15 10:59:41.716437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.463 [2024-05-15 10:59:41.716443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.463 [2024-05-15 10:59:41.716449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:44.463 [2024-05-15 10:59:41.716490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.769 [2024-05-15 10:59:41.716586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.769 [2024-05-15 10:59:41.716671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.769 [2024-05-15 10:59:41.716673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:45.337 "tick_rate": 2300000000, 00:07:45.337 "poll_groups": [ 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_000", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [] 00:07:45.337 }, 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_001", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 
0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [] 00:07:45.337 }, 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_002", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [] 00:07:45.337 }, 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_003", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [] 00:07:45.337 } 00:07:45.337 ] 00:07:45.337 }' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.337 [2024-05-15 10:59:42.551435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:45.337 "tick_rate": 2300000000, 00:07:45.337 "poll_groups": [ 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_000", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [ 00:07:45.337 { 00:07:45.337 "trtype": "TCP" 00:07:45.337 } 00:07:45.337 ] 00:07:45.337 }, 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_001", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [ 00:07:45.337 { 00:07:45.337 "trtype": "TCP" 00:07:45.337 } 00:07:45.337 ] 00:07:45.337 }, 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_002", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [ 00:07:45.337 { 00:07:45.337 "trtype": "TCP" 00:07:45.337 } 00:07:45.337 ] 00:07:45.337 }, 00:07:45.337 { 00:07:45.337 "name": "nvmf_tgt_poll_group_003", 00:07:45.337 "admin_qpairs": 0, 00:07:45.337 "io_qpairs": 0, 00:07:45.337 "current_admin_qpairs": 0, 00:07:45.337 "current_io_qpairs": 0, 00:07:45.337 "pending_bdev_io": 0, 00:07:45.337 "completed_nvme_io": 0, 00:07:45.337 "transports": [ 00:07:45.337 { 00:07:45.337 "trtype": "TCP" 00:07:45.337 } 00:07:45.337 ] 00:07:45.337 } 
00:07:45.337 ] 00:07:45.337 }' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:45.337 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:45.338 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 Malloc1 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- 
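The `jcount` and `jsum` helpers invoked above are thin shell pipelines: `jcount` runs `jq '<filter>' | wc -l` over the RPC output, and `jsum` runs `jq '<filter>' | awk '{s+=$1}END{print s}'`. A sketch of the same checks in Python, using a stats dict trimmed from the `nvmf_get_stats` output in the log (field values as shown; the helper names mirror `target/rpc.sh` but the Python is illustrative):

```python
# Trimmed nvmf_get_stats output: four idle poll groups, one per core in -m 0xF.
stats = {
    "tick_rate": 2300000000,
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_%03d" % i,
         "admin_qpairs": 0, "io_qpairs": 0,
         "transports": [{"trtype": "TCP"}]}
        for i in range(4)
    ],
}

def jcount_names(stats):
    # mirrors: jq '.poll_groups[].name' | wc -l
    return len([g["name"] for g in stats["poll_groups"]])

def jsum(stats, key):
    # mirrors: jq '.poll_groups[].<key>' | awk '{s+=$1}END{print s}'
    return sum(g[key] for g in stats["poll_groups"])

print(jcount_names(stats))           # 4  -> matches (( 4 == 4 ))
print(jsum(stats, "admin_qpairs"))   # 0  -> matches (( 0 == 0 ))
```

This is exactly what the `(( 4 == 4 ))` and `(( 0 == 0 ))` assertions in the trace are checking: one poll group per reactor core, with no qpairs yet since no initiator has connected.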
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 [2024-05-15 10:59:42.715321] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:45.596 [2024-05-15 10:59:42.715557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:45.596 10:59:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:45.596 [2024-05-15 10:59:42.743999] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:45.596 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:45.596 could not add new controller: failed to write to nvme-fabrics device 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@652 -- # es=1 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:45.596 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:45.597 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:45.597 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.597 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.597 10:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.597 10:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.969 10:59:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.969 10:59:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:07:46.969 10:59:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.969 10:59:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:46.969 10:59:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 
-- # (( nvme_devices == nvme_device_counter )) 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:07:48.872 10:59:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.872 [2024-05-15 10:59:46.099322] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:48.872 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:48.872 could not add new controller: failed to write to nvme-fabrics device 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:48.872 
10:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.872 10:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.248 10:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.248 10:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:07:50.248 10:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.248 10:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:50.248 10:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 [2024-05-15 10:59:49.402774] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.153 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.412 10:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.412 10:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.348 10:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.348 10:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:07:53.348 10:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.348 10:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:53.348 10:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 
-- # grep -c SPDKISFASTANDAWESOME 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.884 [2024-05-15 10:59:52.731195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.884 10:59:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.821 10:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.821 10:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:07:56.821 10:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.821 10:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:56.821 10:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:07:58.725 10:59:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.985 10:59:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.985 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:07:58.985 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:58.985 10:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:58.985 10:59:56 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.985 [2024-05-15 10:59:56.056353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.985 10:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.364 10:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.364 10:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:00.364 10:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.364 10:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:00.364 10:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 
00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.265 
10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 [2024-05-15 10:59:59.489141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.265 10:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.641 11:00:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.641 11:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:03.641 11:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.641 11:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:03.641 11:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1228 -- # return 0 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.553 [2024-05-15 11:00:02.811782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.553 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.812 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.812 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.812 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.812 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.812 11:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.812 11:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.745 11:00:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.745 11:00:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:06.745 11:00:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.745 11:00:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:06.745 11:00:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:08.649 11:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:08.906 11:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:08.906 11:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.906 11:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:08.906 11:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.906 11:00:05 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:08.906 11:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 [2024-05-15 11:00:06.102000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 [2024-05-15 11:00:06.150135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.906 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.164 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 
[2024-05-15 11:00:06.202299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 [2024-05-15 11:00:06.250443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 
11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 [2024-05-15 11:00:06.298615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # 
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:09.165 "tick_rate": 2300000000, 00:08:09.165 "poll_groups": [ 00:08:09.165 { 00:08:09.165 "name": "nvmf_tgt_poll_group_000", 00:08:09.165 "admin_qpairs": 2, 00:08:09.165 "io_qpairs": 168, 00:08:09.165 "current_admin_qpairs": 0, 00:08:09.165 "current_io_qpairs": 0, 00:08:09.165 "pending_bdev_io": 0, 00:08:09.165 "completed_nvme_io": 224, 00:08:09.165 "transports": [ 00:08:09.165 { 00:08:09.165 "trtype": "TCP" 00:08:09.165 } 00:08:09.165 ] 
00:08:09.165 }, 00:08:09.165 { 00:08:09.165 "name": "nvmf_tgt_poll_group_001", 00:08:09.165 "admin_qpairs": 2, 00:08:09.165 "io_qpairs": 168, 00:08:09.165 "current_admin_qpairs": 0, 00:08:09.165 "current_io_qpairs": 0, 00:08:09.165 "pending_bdev_io": 0, 00:08:09.165 "completed_nvme_io": 171, 00:08:09.165 "transports": [ 00:08:09.165 { 00:08:09.165 "trtype": "TCP" 00:08:09.165 } 00:08:09.165 ] 00:08:09.165 }, 00:08:09.165 { 00:08:09.165 "name": "nvmf_tgt_poll_group_002", 00:08:09.165 "admin_qpairs": 1, 00:08:09.165 "io_qpairs": 168, 00:08:09.165 "current_admin_qpairs": 0, 00:08:09.165 "current_io_qpairs": 0, 00:08:09.165 "pending_bdev_io": 0, 00:08:09.165 "completed_nvme_io": 316, 00:08:09.165 "transports": [ 00:08:09.165 { 00:08:09.165 "trtype": "TCP" 00:08:09.165 } 00:08:09.165 ] 00:08:09.165 }, 00:08:09.165 { 00:08:09.165 "name": "nvmf_tgt_poll_group_003", 00:08:09.165 "admin_qpairs": 2, 00:08:09.165 "io_qpairs": 168, 00:08:09.165 "current_admin_qpairs": 0, 00:08:09.165 "current_io_qpairs": 0, 00:08:09.165 "pending_bdev_io": 0, 00:08:09.165 "completed_nvme_io": 311, 00:08:09.165 "transports": [ 00:08:09.165 { 00:08:09.165 "trtype": "TCP" 00:08:09.165 } 00:08:09.165 ] 00:08:09.165 } 00:08:09.165 ] 00:08:09.165 }' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:08:09.165 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.426 rmmod nvme_tcp 00:08:09.426 rmmod nvme_fabrics 00:08:09.426 rmmod nvme_keyring 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2123925 ']' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2123925 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 2123925 ']' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 2123925 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2123925 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2123925' 00:08:09.426 killing process with pid 2123925 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 2123925 00:08:09.426 [2024-05-15 11:00:06.551589] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:09.426 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 2123925 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.712 11:00:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.627 11:00:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:11.627 00:08:11.627 real 0m32.633s 00:08:11.627 user 1m41.284s 00:08:11.627 sys 0m5.776s 00:08:11.627 11:00:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:11.627 11:00:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.627 ************************************ 00:08:11.627 END TEST nvmf_rpc 00:08:11.627 ************************************ 
00:08:11.627 11:00:08 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:11.627 11:00:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:11.627 11:00:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:11.627 11:00:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.886 ************************************ 00:08:11.886 START TEST nvmf_invalid 00:08:11.886 ************************************ 00:08:11.886 11:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:11.886 * Looking for test storage... 00:08:11.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.886 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.887 11:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:17.150 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.150 11:00:13 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:17.150 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:17.150 Found net devices under 0000:86:00.0: cvl_0_0 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:17.150 Found net devices under 0000:86:00.1: cvl_0_1 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.150 11:00:13 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:17.150 00:08:17.150 --- 10.0.0.2 ping statistics --- 00:08:17.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.150 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:08:17.150 00:08:17.150 --- 10.0.0.1 ping statistics --- 00:08:17.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.150 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=2132050 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2132050 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 2132050 ']' 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:17.150 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.151 11:00:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.151 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:17.151 11:00:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:17.151 [2024-05-15 11:00:13.946078] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:08:17.151 [2024-05-15 11:00:13.946121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.151 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.151 [2024-05-15 11:00:14.001929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.151 [2024-05-15 11:00:14.082277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.151 [2024-05-15 11:00:14.082313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:17.151 [2024-05-15 11:00:14.082320] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.151 [2024-05-15 11:00:14.082327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.151 [2024-05-15 11:00:14.082332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.151 [2024-05-15 11:00:14.082374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.151 [2024-05-15 11:00:14.082472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.151 [2024-05-15 11:00:14.082489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.151 [2024-05-15 11:00:14.082491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29916 00:08:17.740 [2024-05-15 11:00:14.958654] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:08:17.740 { 00:08:17.740 "nqn": "nqn.2016-06.io.spdk:cnode29916", 00:08:17.740 "tgt_name": "foobar", 00:08:17.740 "method": "nvmf_create_subsystem", 00:08:17.740 "req_id": 1 00:08:17.740 } 00:08:17.740 Got JSON-RPC error response 00:08:17.740 response: 00:08:17.740 { 00:08:17.740 "code": -32603, 00:08:17.740 "message": "Unable to find target foobar" 00:08:17.740 }' 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:17.740 { 00:08:17.740 "nqn": "nqn.2016-06.io.spdk:cnode29916", 00:08:17.740 "tgt_name": "foobar", 00:08:17.740 "method": "nvmf_create_subsystem", 00:08:17.740 "req_id": 1 00:08:17.740 } 00:08:17.740 Got JSON-RPC error response 00:08:17.740 response: 00:08:17.740 { 00:08:17.740 "code": -32603, 00:08:17.740 "message": "Unable to find target foobar" 00:08:17.740 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:17.740 11:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29468 00:08:17.999 [2024-05-15 11:00:15.147350] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29468: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:17.999 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:17.999 { 00:08:17.999 "nqn": "nqn.2016-06.io.spdk:cnode29468", 00:08:17.999 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:17.999 "method": "nvmf_create_subsystem", 00:08:17.999 "req_id": 1 00:08:17.999 } 00:08:17.999 Got JSON-RPC error response 00:08:17.999 response: 00:08:17.999 { 00:08:17.999 "code": -32602, 00:08:17.999 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:17.999 }' 00:08:17.999 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:17.999 { 00:08:17.999 "nqn": 
"nqn.2016-06.io.spdk:cnode29468", 00:08:17.999 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:17.999 "method": "nvmf_create_subsystem", 00:08:17.999 "req_id": 1 00:08:17.999 } 00:08:17.999 Got JSON-RPC error response 00:08:17.999 response: 00:08:17.999 { 00:08:17.999 "code": -32602, 00:08:17.999 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:17.999 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:17.999 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:17.999 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5874 00:08:18.258 [2024-05-15 11:00:15.335946] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5874: invalid model number 'SPDK_Controller' 00:08:18.258 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:18.258 { 00:08:18.258 "nqn": "nqn.2016-06.io.spdk:cnode5874", 00:08:18.258 "model_number": "SPDK_Controller\u001f", 00:08:18.258 "method": "nvmf_create_subsystem", 00:08:18.258 "req_id": 1 00:08:18.258 } 00:08:18.258 Got JSON-RPC error response 00:08:18.258 response: 00:08:18.258 { 00:08:18.258 "code": -32602, 00:08:18.258 "message": "Invalid MN SPDK_Controller\u001f" 00:08:18.258 }' 00:08:18.258 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:18.258 { 00:08:18.258 "nqn": "nqn.2016-06.io.spdk:cnode5874", 00:08:18.258 "model_number": "SPDK_Controller\u001f", 00:08:18.258 "method": "nvmf_create_subsystem", 00:08:18.258 "req_id": 1 00:08:18.258 } 00:08:18.258 Got JSON-RPC error response 00:08:18.258 response: 00:08:18.258 { 00:08:18.258 "code": -32602, 00:08:18.258 "message": "Invalid MN SPDK_Controller\u001f" 00:08:18.258 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:18.258 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:18.258 11:00:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:08:18.258 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x73' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=% 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 48 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '],s)5rtWx7%^amb0$wr0' 00:08:18.259 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '],s)5rtWx7%^amb0$wr0' nqn.2016-06.io.spdk:cnode13577 00:08:18.519 [2024-05-15 11:00:15.649016] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13577: invalid serial number '],s)5rtWx7%^amb0$wr0' 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:18.519 { 00:08:18.519 "nqn": "nqn.2016-06.io.spdk:cnode13577", 00:08:18.519 "serial_number": "],s)\u007f5rtWx7%^amb0$wr0", 00:08:18.519 "method": "nvmf_create_subsystem", 00:08:18.519 "req_id": 1 00:08:18.519 } 00:08:18.519 Got JSON-RPC error response 00:08:18.519 response: 00:08:18.519 { 00:08:18.519 "code": -32602, 00:08:18.519 "message": "Invalid SN ],s)\u007f5rtWx7%^amb0$wr0" 00:08:18.519 }' 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:18.519 { 00:08:18.519 "nqn": "nqn.2016-06.io.spdk:cnode13577", 00:08:18.519 "serial_number": "],s)\u007f5rtWx7%^amb0$wr0", 00:08:18.519 "method": "nvmf_create_subsystem", 00:08:18.519 "req_id": 1 00:08:18.519 } 00:08:18.519 Got JSON-RPC error response 00:08:18.519 response: 00:08:18.519 { 00:08:18.519 "code": -32602, 00:08:18.519 "message": "Invalid SN ],s)\u007f5rtWx7%^amb0$wr0" 00:08:18.519 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' 
'35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.519 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:18.520 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:18.520 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.520 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:18.778 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:18.778 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:18.779 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:18.779 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:18.779 11:00:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Us%nWAnG"IwiHBv%^^nu8cp\8`B1wNQ!HQ6[cVK|u' 00:08:18.779 11:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Us%nWAnG"IwiHBv%^^nu8cp\8`B1wNQ!HQ6[cVK|u' nqn.2016-06.io.spdk:cnode16785 00:08:19.038 [2024-05-15 11:00:16.102558] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16785: invalid model number 'Us%nWAnG"IwiHBv%^^nu8cp\8`B1wNQ!HQ6[cVK|u' 00:08:19.038 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:19.038 { 00:08:19.038 "nqn": "nqn.2016-06.io.spdk:cnode16785", 00:08:19.038 "model_number": "Us%nWAnG\"IwiHBv%^^nu8cp\\8`B1wNQ!HQ6[cVK|u", 00:08:19.038 "method": "nvmf_create_subsystem", 00:08:19.038 "req_id": 1 00:08:19.038 } 00:08:19.038 Got JSON-RPC error response 00:08:19.038 response: 00:08:19.038 { 00:08:19.038 "code": -32602, 00:08:19.038 "message": "Invalid MN Us%nWAnG\"IwiHBv%^^nu8cp\\8`B1wNQ!HQ6[cVK|u" 00:08:19.038 }' 00:08:19.038 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:19.038 { 00:08:19.038 "nqn": "nqn.2016-06.io.spdk:cnode16785", 
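The long `string+=` trace above is invalid.sh building a random printable string one character at a time, then passing it to `nvmf_create_subsystem -d` as a deliberately bad model number. A minimal sketch of that loop, assuming bash (the helper name `gen_random_string` and the use of `$RANDOM` are assumptions; only the `printf %x` / `echo -e '\xNN'` mechanics come from the log):

```shell
# Pick a printable ASCII code point, render it as hex, decode it back to
# a character, and append it to the result -- the same printf/echo -e
# dance visible in each loop iteration of the trace above.
gen_random_string() {
	local length=$1 string='' ll code
	for (( ll = 0; ll < length; ll++ )); do
		code=$(( RANDOM % 94 + 33 ))              # printable range 0x21-0x7e
		string+=$(echo -e "\x$(printf %x "$code")")
	done
	printf '%s\n' "$string"                       # printf avoids echo option parsing
}
s=$(gen_random_string 41)   # 41 characters, as in the model-number test above
```

Any such string containing characters outside the spec's allowed set should make the target reject the subsystem with `Invalid MN`, which is exactly what the `rpc_nvmf_create_subsystem: *ERROR*` line below shows.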
00:08:19.038 "model_number": "Us%nWAnG\"IwiHBv%^^nu8cp\\8`B1wNQ!HQ6[cVK|u", 00:08:19.038 "method": "nvmf_create_subsystem", 00:08:19.038 "req_id": 1 00:08:19.038 } 00:08:19.038 Got JSON-RPC error response 00:08:19.038 response: 00:08:19.038 { 00:08:19.038 "code": -32602, 00:08:19.038 "message": "Invalid MN Us%nWAnG\"IwiHBv%^^nu8cp\\8`B1wNQ!HQ6[cVK|u" 00:08:19.038 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:19.038 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:19.038 [2024-05-15 11:00:16.295280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.295 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:19.295 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:19.295 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:19.295 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:19.295 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:19.295 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:19.552 [2024-05-15 11:00:16.656451] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:19.552 [2024-05-15 11:00:16.656524] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:19.552 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:19.552 { 00:08:19.552 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:19.552 "listen_address": { 00:08:19.552 "trtype": "tcp", 
00:08:19.552 "traddr": "", 00:08:19.552 "trsvcid": "4421" 00:08:19.552 }, 00:08:19.552 "method": "nvmf_subsystem_remove_listener", 00:08:19.552 "req_id": 1 00:08:19.552 } 00:08:19.552 Got JSON-RPC error response 00:08:19.552 response: 00:08:19.552 { 00:08:19.552 "code": -32602, 00:08:19.552 "message": "Invalid parameters" 00:08:19.552 }' 00:08:19.552 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:19.552 { 00:08:19.552 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:19.552 "listen_address": { 00:08:19.552 "trtype": "tcp", 00:08:19.552 "traddr": "", 00:08:19.552 "trsvcid": "4421" 00:08:19.552 }, 00:08:19.552 "method": "nvmf_subsystem_remove_listener", 00:08:19.552 "req_id": 1 00:08:19.552 } 00:08:19.552 Got JSON-RPC error response 00:08:19.552 response: 00:08:19.552 { 00:08:19.552 "code": -32602, 00:08:19.552 "message": "Invalid parameters" 00:08:19.552 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:19.552 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7214 -i 0 00:08:19.811 [2024-05-15 11:00:16.829042] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7214: invalid cntlid range [0-65519] 00:08:19.811 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:19.811 { 00:08:19.811 "nqn": "nqn.2016-06.io.spdk:cnode7214", 00:08:19.811 "min_cntlid": 0, 00:08:19.811 "method": "nvmf_create_subsystem", 00:08:19.811 "req_id": 1 00:08:19.811 } 00:08:19.811 Got JSON-RPC error response 00:08:19.811 response: 00:08:19.811 { 00:08:19.811 "code": -32602, 00:08:19.811 "message": "Invalid cntlid range [0-65519]" 00:08:19.811 }' 00:08:19.811 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:19.811 { 00:08:19.811 "nqn": "nqn.2016-06.io.spdk:cnode7214", 00:08:19.811 "min_cntlid": 0, 00:08:19.811 "method": "nvmf_create_subsystem", 
00:08:19.811 "req_id": 1 00:08:19.811 } 00:08:19.811 Got JSON-RPC error response 00:08:19.811 response: 00:08:19.811 { 00:08:19.811 "code": -32602, 00:08:19.811 "message": "Invalid cntlid range [0-65519]" 00:08:19.811 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:19.811 11:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20889 -i 65520 00:08:19.811 [2024-05-15 11:00:17.001624] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20889: invalid cntlid range [65520-65519] 00:08:19.811 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:19.811 { 00:08:19.811 "nqn": "nqn.2016-06.io.spdk:cnode20889", 00:08:19.811 "min_cntlid": 65520, 00:08:19.811 "method": "nvmf_create_subsystem", 00:08:19.811 "req_id": 1 00:08:19.811 } 00:08:19.811 Got JSON-RPC error response 00:08:19.811 response: 00:08:19.811 { 00:08:19.811 "code": -32602, 00:08:19.811 "message": "Invalid cntlid range [65520-65519]" 00:08:19.811 }' 00:08:19.811 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:19.811 { 00:08:19.811 "nqn": "nqn.2016-06.io.spdk:cnode20889", 00:08:19.811 "min_cntlid": 65520, 00:08:19.811 "method": "nvmf_create_subsystem", 00:08:19.811 "req_id": 1 00:08:19.811 } 00:08:19.811 Got JSON-RPC error response 00:08:19.811 response: 00:08:19.811 { 00:08:19.811 "code": -32602, 00:08:19.811 "message": "Invalid cntlid range [65520-65519]" 00:08:19.811 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:19.811 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1845 -I 0 00:08:20.070 [2024-05-15 11:00:17.190317] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1845: invalid cntlid range [1-0] 00:08:20.070 11:00:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:20.070 { 00:08:20.070 "nqn": "nqn.2016-06.io.spdk:cnode1845", 00:08:20.070 "max_cntlid": 0, 00:08:20.070 "method": "nvmf_create_subsystem", 00:08:20.070 "req_id": 1 00:08:20.070 } 00:08:20.070 Got JSON-RPC error response 00:08:20.070 response: 00:08:20.070 { 00:08:20.070 "code": -32602, 00:08:20.070 "message": "Invalid cntlid range [1-0]" 00:08:20.070 }' 00:08:20.070 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:20.070 { 00:08:20.070 "nqn": "nqn.2016-06.io.spdk:cnode1845", 00:08:20.070 "max_cntlid": 0, 00:08:20.070 "method": "nvmf_create_subsystem", 00:08:20.070 "req_id": 1 00:08:20.070 } 00:08:20.070 Got JSON-RPC error response 00:08:20.070 response: 00:08:20.070 { 00:08:20.070 "code": -32602, 00:08:20.070 "message": "Invalid cntlid range [1-0]" 00:08:20.070 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:20.070 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1921 -I 65520 00:08:20.328 [2024-05-15 11:00:17.386966] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1921: invalid cntlid range [1-65520] 00:08:20.328 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:20.328 { 00:08:20.328 "nqn": "nqn.2016-06.io.spdk:cnode1921", 00:08:20.328 "max_cntlid": 65520, 00:08:20.328 "method": "nvmf_create_subsystem", 00:08:20.328 "req_id": 1 00:08:20.328 } 00:08:20.328 Got JSON-RPC error response 00:08:20.328 response: 00:08:20.328 { 00:08:20.328 "code": -32602, 00:08:20.328 "message": "Invalid cntlid range [1-65520]" 00:08:20.328 }' 00:08:20.328 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:20.328 { 00:08:20.328 "nqn": "nqn.2016-06.io.spdk:cnode1921", 00:08:20.328 "max_cntlid": 65520, 00:08:20.328 "method": "nvmf_create_subsystem", 00:08:20.328 
"req_id": 1 00:08:20.328 } 00:08:20.328 Got JSON-RPC error response 00:08:20.328 response: 00:08:20.328 { 00:08:20.328 "code": -32602, 00:08:20.328 "message": "Invalid cntlid range [1-65520]" 00:08:20.328 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:20.329 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13271 -i 6 -I 5 00:08:20.329 [2024-05-15 11:00:17.575651] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13271: invalid cntlid range [6-5] 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:20.587 { 00:08:20.587 "nqn": "nqn.2016-06.io.spdk:cnode13271", 00:08:20.587 "min_cntlid": 6, 00:08:20.587 "max_cntlid": 5, 00:08:20.587 "method": "nvmf_create_subsystem", 00:08:20.587 "req_id": 1 00:08:20.587 } 00:08:20.587 Got JSON-RPC error response 00:08:20.587 response: 00:08:20.587 { 00:08:20.587 "code": -32602, 00:08:20.587 "message": "Invalid cntlid range [6-5]" 00:08:20.587 }' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:20.587 { 00:08:20.587 "nqn": "nqn.2016-06.io.spdk:cnode13271", 00:08:20.587 "min_cntlid": 6, 00:08:20.587 "max_cntlid": 5, 00:08:20.587 "method": "nvmf_create_subsystem", 00:08:20.587 "req_id": 1 00:08:20.587 } 00:08:20.587 Got JSON-RPC error response 00:08:20.587 response: 00:08:20.587 { 00:08:20.587 "code": -32602, 00:08:20.587 "message": "Invalid cntlid range [6-5]" 00:08:20.587 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:20.587 { 00:08:20.587 "name": "foobar", 00:08:20.587 "method": "nvmf_delete_target", 
00:08:20.587 "req_id": 1 00:08:20.587 } 00:08:20.587 Got JSON-RPC error response 00:08:20.587 response: 00:08:20.587 { 00:08:20.587 "code": -32602, 00:08:20.587 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:20.587 }' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:20.587 { 00:08:20.587 "name": "foobar", 00:08:20.587 "method": "nvmf_delete_target", 00:08:20.587 "req_id": 1 00:08:20.587 } 00:08:20.587 Got JSON-RPC error response 00:08:20.587 response: 00:08:20.587 { 00:08:20.587 "code": -32602, 00:08:20.587 "message": "The specified target doesn't exist, cannot delete it." 00:08:20.587 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.587 rmmod nvme_tcp 00:08:20.587 rmmod nvme_fabrics 00:08:20.587 rmmod nvme_keyring 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2132050 ']' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 
2132050 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 2132050 ']' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 2132050 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2132050 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2132050' 00:08:20.587 killing process with pid 2132050 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 2132050 00:08:20.587 [2024-05-15 11:00:17.807108] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:20.587 11:00:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 2132050 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:08:20.846 11:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.387 11:00:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.387 00:08:23.387 real 0m11.164s 00:08:23.387 user 0m19.184s 00:08:23.387 sys 0m4.566s 00:08:23.387 11:00:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:23.387 11:00:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:23.387 ************************************ 00:08:23.387 END TEST nvmf_invalid 00:08:23.387 ************************************ 00:08:23.387 11:00:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:23.387 11:00:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:23.387 11:00:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:23.387 11:00:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.387 ************************************ 00:08:23.387 START TEST nvmf_abort 00:08:23.387 ************************************ 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:23.387 * Looking for test storage... 
00:08:23.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.387 11:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.388 11:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.388 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.388 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.388 11:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.388 11:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.667 11:00:25 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:28.667 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:28.667 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:28.667 Found net devices under 0000:86:00.0: cvl_0_0 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:28.667 Found net devices under 
0000:86:00.1: cvl_0_1 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.667 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:28.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:08:28.667 00:08:28.668 --- 10.0.0.2 ping statistics --- 00:08:28.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.668 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:08:28.668 00:08:28.668 --- 10.0.0.1 ping statistics --- 00:08:28.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.668 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2136428 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2136428 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 2136428 ']' 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- 
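Stripped of the xtrace noise, the `nvmf_tcp_init` section above moves one port of a two-port NIC into a network namespace so that target and initiator traffic actually cross the wire. A dry-run sketch of those steps follows; the namespace and interface names (`cvl_0_0_ns_spdk`, `cvl_0_0`, `cvl_0_1`) are copied from this log and are specific to this rig's `ice` devices — each command is echoed rather than executed, since the real sequence needs root and that hardware:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup nvmf_tcp_init performs in this log.
# Commands are echoed, not executed: the live steps need root and the
# cvl_0_0/cvl_0_1 net devices present on this test node.
run() { echo "+ $*"; }                        # change to: run() { "$@"; } to execute

run ip netns add cvl_0_0_ns_spdk              # target runs in its own namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, host namespace
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # sanity-check both directions,
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # as the log does above
```

The pings at the end mirror the two successful `ping -c 1` checks recorded in the log before the target application is started inside the namespace.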
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:28.668 11:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.668 [2024-05-15 11:00:25.776073] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:08:28.668 [2024-05-15 11:00:25.776112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.668 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.668 [2024-05-15 11:00:25.833288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.668 [2024-05-15 11:00:25.904395] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.668 [2024-05-15 11:00:25.904436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.668 [2024-05-15 11:00:25.904443] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.668 [2024-05-15 11:00:25.904449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.668 [2024-05-15 11:00:25.904454] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:28.668 [2024-05-15 11:00:25.904510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.668 [2024-05-15 11:00:25.904593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.668 [2024-05-15 11:00:25.904594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 [2024-05-15 11:00:26.621491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 Malloc0 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 Delay0 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 [2024-05-15 11:00:26.684104] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:29.603 [2024-05-15 11:00:26.684339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.603 11:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:29.603 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.603 [2024-05-15 11:00:26.835310] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:32.139 Initializing NVMe Controllers 00:08:32.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:32.139 controller IO queue size 128 less than required 00:08:32.139 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:32.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:32.139 Initialization complete. Launching workers. 
00:08:32.139 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 44039 00:08:32.139 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 44100, failed to submit 62 00:08:32.139 success 44043, unsuccess 57, failed 0 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.139 rmmod nvme_tcp 00:08:32.139 rmmod nvme_fabrics 00:08:32.139 rmmod nvme_keyring 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2136428 ']' 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2136428 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 2136428 ']' 00:08:32.139 11:00:28 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 2136428 00:08:32.139 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2136428 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2136428' 00:08:32.140 killing process with pid 2136428 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 2136428 00:08:32.140 [2024-05-15 11:00:28.982707] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:32.140 11:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 2136428 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.140 11:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.045 11:00:31 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.045 00:08:34.045 real 0m11.122s 00:08:34.045 user 0m13.060s 00:08:34.045 sys 0m5.030s 00:08:34.045 11:00:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:34.045 11:00:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.045 ************************************ 00:08:34.045 END TEST nvmf_abort 00:08:34.045 ************************************ 00:08:34.045 11:00:31 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:34.045 11:00:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:34.045 11:00:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:34.045 11:00:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.305 ************************************ 00:08:34.305 START TEST nvmf_ns_hotplug_stress 00:08:34.305 ************************************ 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:34.305 * Looking for test storage... 
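Before the next test begins: the `nvmf_abort` run that just finished reduces to a short RPC sequence against the running target, followed by one invocation of the abort example. A sketch of that sequence in the same dry-run style; the subsystem NQN, bdev sizes, and delay parameters are taken verbatim from the `target/abort.sh` trace above, and `rpc.py` here stands in for the workspace's `scripts/rpc.py` (normally run under `ip netns exec` against the target's namespace):

```shell
#!/bin/sh
# Dry-run sketch of the RPC calls target/abort.sh issued in this log.
# Echoed rather than executed: a live run needs a started nvmf_tgt and
# this workspace's scripts/rpc.py.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0        # MALLOC_BDEV_SIZE / BLOCK_SIZE
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # delay bdev forces queued I/O
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# then the abort workload, exactly as logged:
#   build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
#       -c 0x1 -t 1 -l warning -q 128
```

Exposing the namespace through a delay bdev is what makes the abort test meaningful: with every I/O artificially delayed by ~1s, the queue-depth-128 workload keeps requests in flight long enough for the abort commands to find them, which matches the large "abort submitted" count in the results above.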
00:08:34.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.305 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.306 11:00:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.306 11:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:39.604 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.604 
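The `gather_supported_nvmf_pci_devs` trace above buckets PCI device IDs into `e810`, `x722`, and `mlx` arrays before picking a NIC family for the run. A minimal sketch of that classification, with the vendor/device IDs copied from the `nvmf/common.sh` lines in the trace (the `classify` helper itself is an illustrative assumption, not SPDK code):

```shell
#!/usr/bin/env bash
# Bucket a (vendor, device) PCI ID pair the way the trace above does.
# IDs are taken verbatim from the nvmf/common.sh@296-318 lines; the
# classify() wrapper is a hypothetical helper for illustration.
intel=0x8086 mellanox=0x15b3
e810=(0x1592 0x159b)
x722=(0x37d2)
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)

classify() {  # classify <vendor-id> <device-id> -> prints bucket name
    local v=$1 d=$2 id
    if [[ $v == "$intel" ]]; then
        for id in "${e810[@]}"; do [[ $d == "$id" ]] && { echo e810; return; }; done
        for id in "${x722[@]}"; do [[ $d == "$id" ]] && { echo x722; return; }; done
    elif [[ $v == "$mellanox" ]]; then
        for id in "${mlx[@]}"; do [[ $d == "$id" ]] && { echo mlx; return; }; done
    fi
    echo unknown
}
```

On this node both discovered ports are `0x8086:0x159b`, so (as the `[[ e810 == e810 ]]` branch in the trace shows) the e810 bucket becomes `pci_devs` for the TCP run.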
11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:39.604 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.604 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:39.605 
Found net devices under 0000:86:00.0: cvl_0_0 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:39.605 Found net devices under 0000:86:00.1: cvl_0_1 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.605 11:00:36 
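The `nvmf_tcp_init` commands above build a loopback test topology without a switch: one physical port (`cvl_0_0`) is moved into a fresh network namespace and becomes the target side, while its sibling (`cvl_0_1`) stays in the root namespace as the initiator. A sketch of that sequence, with interface names, addresses, and the 4420 firewall rule taken from the trace (the `run` dry-run wrapper is an added convenience so the sketch can be read without root):

```shell
#!/usr/bin/env bash
# Namespace topology from the trace: cvl_0_0 -> target netns (10.0.0.2),
# cvl_0_1 -> root netns initiator (10.0.0.1). run() only echoes commands
# unless NETNS_APPLY=1 is set, since the real commands need root and NICs.
run() { if [[ "${NETNS_APPLY:-0}" == 1 ]]; then "$@"; else echo "+ $*"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                         # cross-namespace reachability check
```

The two `ping` records that follow in the log are exactly this reachability check, run once in each direction before the target is started.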
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:08:39.605 00:08:39.605 --- 10.0.0.2 ping statistics --- 00:08:39.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.605 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:39.605 00:08:39.605 --- 10.0.0.1 ping statistics --- 00:08:39.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.605 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2140324 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2140324 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 2140324 ']' 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:39.605 11:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 [2024-05-15 11:00:36.794056] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:08:39.605 [2024-05-15 11:00:36.794101] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.605 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.864 [2024-05-15 11:00:36.852656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.864 [2024-05-15 11:00:36.932812] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.864 [2024-05-15 11:00:36.932846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.864 [2024-05-15 11:00:36.932854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.864 [2024-05-15 11:00:36.932860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.864 [2024-05-15 11:00:36.932865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
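The `nvmfappstart` step above launches `nvmf_tgt` inside the target namespace and then `waitforlisten` blocks until the RPC socket is up (hence the "Waiting for process to start up and listen on UNIX domain socket" line). A simplified sketch of that pattern; the binary path, namespace name, and `-i 0 -e 0xFFFF -m 0xE` flags come from the trace, but the polling loop here is an assumption, much reduced from SPDK's real `waitforlisten` helper:

```shell
#!/usr/bin/env bash
# Simplified reconstruction of nvmfappstart/waitforlisten: start the
# target inside the namespace, then poll for /var/tmp/spdk.sock while
# confirming the process is still alive. Retry budget is an assumption.
NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

start_and_wait() {
    "${NS_CMD[@]}" "$APP" -i 0 -e 0xFFFF -m 0xE &
    local pid=$! retries=100
    while (( retries-- > 0 )); do
        [[ -S /var/tmp/spdk.sock ]] && return 0          # RPC socket is up
        kill -0 "$pid" 2>/dev/null || return 1           # target died during startup
        sleep 0.1
    done
    return 1
}
```

Once the socket exists, every later `rpc.py` call in the log talks to this process over `/var/tmp/spdk.sock`.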
00:08:39.864 [2024-05-15 11:00:36.932913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.864 [2024-05-15 11:00:36.932998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.864 [2024-05-15 11:00:36.932999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:40.430 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.688 [2024-05-15 11:00:37.802606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.688 11:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.946 11:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.946 [2024-05-15 11:00:38.183828] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:40.946 [2024-05-15 11:00:38.184039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.205 11:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.205 11:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:41.461 Malloc0 00:08:41.461 11:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:41.719 Delay0 00:08:41.719 11:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.719 11:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:41.977 NULL1 00:08:41.977 11:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:42.235 11:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:42.235 11:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2140710 00:08:42.236 
11:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:42.236 11:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.236 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.611 Read completed with error (sct=0, sc=11) 00:08:43.611 11:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.611 11:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:43.611 11:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:43.611 true 00:08:43.869 11:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:43.869 11:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.803 11:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.803 11:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1002 00:08:44.803 11:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:44.803 true 00:08:45.060 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:45.060 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.060 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.317 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:45.317 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:45.575 true 00:08:45.575 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:45.575 11:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.950 11:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.950 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:08:46.950 11:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:46.950 11:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:46.950 true 00:08:46.950 11:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:46.950 11:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.883 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.140 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:48.140 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:48.140 true 00:08:48.140 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:48.140 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.398 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.656 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:48.656 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:48.656 true 00:08:48.913 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:48.913 11:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.913 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.171 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:49.171 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:49.428 true 00:08:49.428 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:49.428 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.428 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.685 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:49.686 11:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:49.943 true 00:08:49.943 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:49.943 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.200 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.200 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:50.200 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:50.463 true 00:08:50.463 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:50.463 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.722 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.722 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:50.722 11:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:50.981 true 00:08:50.981 11:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:50.981 11:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.357 11:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.357 11:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:52.357 11:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:52.615 true 00:08:52.615 11:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:52.615 11:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.550 11:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.550 11:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:53.550 11:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:53.808 true 00:08:53.808 11:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:53.809 11:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.809 11:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.067 11:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:54.067 11:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:54.324 true 00:08:54.324 11:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:54.324 11:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.696 11:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.696 11:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:55.696 11:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1014 00:08:55.696 true 00:08:55.954 11:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:55.954 11:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.888 11:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.888 11:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:56.888 11:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:56.888 true 00:08:56.888 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:56.888 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.146 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.404 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:57.404 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:57.404 true 00:08:57.404 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:57.404 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.663 11:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.922 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:57.922 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:57.922 true 00:08:58.181 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:58.181 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.181 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.438 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:58.438 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:58.438 true 00:08:58.696 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:58.696 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.696 11:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.955 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:58.955 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:59.212 true 00:08:59.212 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:59.212 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.212 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.470 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:59.470 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:59.729 true 00:08:59.729 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:08:59.729 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.729 11:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.987 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:59.987 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:00.269 true 00:09:00.269 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:00.269 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.535 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.535 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:00.535 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:00.793 true 00:09:00.794 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:00.794 11:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.052 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.052 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:01.052 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:01.311 true 00:09:01.311 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:01.311 11:00:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.570 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.570 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:01.570 11:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:01.828 true 00:09:01.828 11:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:01.828 11:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.205 11:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.205 11:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:03.205 11:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:03.464 true 00:09:03.464 
11:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:03.464 11:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.397 11:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.397 11:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:04.397 11:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:04.654 true 00:09:04.654 11:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:04.654 11:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.912 11:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.912 11:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:04.912 11:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:05.168 true 00:09:05.169 11:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:05.169 11:01:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.538 11:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.538 11:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:06.538 11:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:06.797 true 00:09:06.797 11:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:06.797 11:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.732 11:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.732 11:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:07.732 11:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:07.991 true 00:09:07.991 
11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:07.991 11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.249 11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.249 11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:08.249 11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:08.508 true 00:09:08.508 11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:08.508 11:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.884 11:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.884 11:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:09.884 
11:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:10.143 true 00:09:10.143 11:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:10.143 11:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.078 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.078 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:11.078 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:11.337 true 00:09:11.337 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710 00:09:11.337 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.597 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.597 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:11.597 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:11.855 true 00:09:11.855 11:01:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710
00:09:11.855 11:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.114 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:12.114 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:09:12.114 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:09:12.373 true
00:09:12.373 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710
00:09:12.373 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.373 Initializing NVMe Controllers
00:09:12.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:12.373 Controller IO queue size 128, less than required.
00:09:12.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:12.373 Controller IO queue size 128, less than required.
00:09:12.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:12.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:12.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:12.373 Initialization complete. Launching workers.
00:09:12.373 ========================================================
00:09:12.373                                                                                                  Latency(us)
00:09:12.373 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:12.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1546.51       0.76   44550.19    2832.84 1012814.74
00:09:12.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   13675.55       6.68    9360.33    1653.29  455925.79
00:09:12.373 ========================================================
00:09:12.373 Total                                                                    :   15222.06       7.43   12935.49    1653.29 1012814.74
00:09:12.373
00:09:12.631 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:12.631 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:09:12.631 11:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:09:12.890 true
00:09:12.890 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2140710
00:09:12.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2140710) - No such process
00:09:12.890 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2140710
00:09:12.890 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.148 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:13.407 11:01:10
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:13.407 null0 00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.407 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:13.666 null1 00:09:13.666 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.666 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.666 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:13.925 null2 00:09:13.925 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.925 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.925 11:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:13.925 null3 00:09:13.925 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.925 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:09:13.925 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:14.183 null4 00:09:14.183 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.183 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.183 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:14.441 null5 00:09:14.441 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.441 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.441 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:14.441 null6 00:09:14.441 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.441 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.441 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:14.701 null7 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:14.701 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2146309 2146311 2146312 2146314 2146316 2146318 2146320 2146322
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.702 11:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:14.960 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:14.960 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:14.960 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:14.961 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:14.961 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:14.961 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:14.961 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:14.961 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:15.220 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:15.479 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:15.480 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:15.480 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:15.739 11:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:16.009 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:16.010 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:16.272 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.273 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:16.531 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:16.789 11:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.048 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.049 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.307 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.565 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.823 11:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.823 11:01:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.823 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
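The loop driving the records above (ns_hotplug_stress.sh@16-@18) can be sketched as plain bash. The interleaved, out-of-order nsid values in the log suggest the RPC calls are launched in parallel; the sketch below assumes that, and stubs `rpc.py` with a shell function so it runs standalone — the real test invokes `scripts/rpc.py` against a running nvmf_tgt instead.

```shell
# Stand-in for scripts/rpc.py (assumption for self-containment); the real
# test calls the SPDK JSON-RPC client against a live nvmf_tgt.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

hotplug_stress() {
    local i n
    for (( i = 0; i < 10; i++ )); do
        # Attach eight null bdevs as namespaces 1..8 in parallel, then wait;
        # nsid n is backed by bdev null$((n-1)), matching the log above.
        for n in {1..8}; do
            rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
        done
        wait
        # Detach them again, also in parallel, to stress the hotplug paths.
        for n in {1..8}; do
            rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
        done
        wait
    done
}
```

Running `hotplug_stress` emits 80 add and 80 remove RPC lines (10 iterations x 8 namespaces), in a nondeterministic per-iteration order because of the backgrounded calls — exactly the interleaving seen in the transcript.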
00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.082 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.341 11:01:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.341 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.600 rmmod nvme_tcp 00:09:18.600 rmmod nvme_fabrics 00:09:18.600 rmmod nvme_keyring 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2140324 ']' 00:09:18.600 
11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2140324 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 2140324 ']' 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 2140324 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2140324 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2140324' 00:09:18.600 killing process with pid 2140324 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 2140324 00:09:18.600 [2024-05-15 11:01:15.804673] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:18.600 11:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 2140324 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.859 11:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.429 11:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.429 00:09:21.429 real 0m46.735s 00:09:21.429 user 3m12.424s 00:09:21.429 sys 0m14.787s 00:09:21.429 11:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:21.429 11:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.429 ************************************ 00:09:21.429 END TEST nvmf_ns_hotplug_stress 00:09:21.429 ************************************ 00:09:21.429 11:01:18 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:21.429 11:01:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:21.429 11:01:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:21.429 11:01:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.429 ************************************ 00:09:21.429 START TEST nvmf_connect_stress 00:09:21.429 ************************************ 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:21.429 * Looking for test storage... 
00:09:21.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.429 11:01:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.429 11:01:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.430 11:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.708 11:01:23 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.708 
11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.708 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.708 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.709 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.709 
11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.709 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.709 11:01:23 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.709 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:09:26.709 00:09:26.709 --- 10.0.0.2 ping statistics --- 00:09:26.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.709 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:09:26.709 00:09:26.709 --- 10.0.0.1 ping statistics --- 00:09:26.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.709 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:26.709 11:01:23 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2150642 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2150642 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 2150642 ']' 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:26.709 11:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 [2024-05-15 11:01:23.692188] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:09:26.709 [2024-05-15 11:01:23.692231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.709 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.709 [2024-05-15 11:01:23.749215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.709 [2024-05-15 11:01:23.828613] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:26.709 [2024-05-15 11:01:23.828648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.709 [2024-05-15 11:01:23.828656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.709 [2024-05-15 11:01:23.828662] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.709 [2024-05-15 11:01:23.828667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.709 [2024-05-15 11:01:23.828773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.709 [2024-05-15 11:01:23.828864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.709 [2024-05-15 11:01:23.828866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.275 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:27.275 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:09:27.275 11:01:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.275 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:27.275 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.534 11:01:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.535 [2024-05-15 11:01:24.553055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.535 11:01:24 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.535 [2024-05-15 11:01:24.573079] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:27.535 [2024-05-15 11:01:24.591294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.535 NULL1 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2150716 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for 
i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.535 11:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.793 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.793 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:27.793 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.793 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.793 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.367 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.367 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:28.367 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.367 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:28.367 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.624 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.624 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:28.624 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.624 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.624 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.882 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.882 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:28.882 11:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.882 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.882 11:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.140 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:29.140 11:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:29.140 11:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.140 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:29.140 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.398 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:29.398 11:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:29.398 11:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.398 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:29.398 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.963 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:29.963 11:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:29.963 11:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.963 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:29.963 11:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.221 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.221 11:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:30.221 11:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.221 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.221 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.479 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.479 11:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:30.479 11:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.479 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.479 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.737 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.737 11:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:30.737 11:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.737 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:30.737 11:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.995 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.995 11:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:30.995 11:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.995 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.995 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.561 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:31.561 11:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:31.561 11:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.561 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:31.561 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.818 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:31.818 11:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:31.818 11:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.818 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:31.818 11:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.077 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.077 11:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:32.077 11:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.077 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:32.077 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.335 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.335 11:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:32.335 11:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.335 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.335 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.593 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.593 11:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:32.593 11:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.593 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.593 11:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.159 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.159 11:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:33.159 11:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.159 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.159 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.417 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.417 11:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:33.417 11:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.417 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:33.417 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.674 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.674 11:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:33.674 11:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.674 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.674 11:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.932 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.932 11:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:33.932 11:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.932 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.932 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.497 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:34.497 11:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:34.497 11:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.497 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:34.497 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.755 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:34.755 11:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:34.755 11:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.755 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:34.755 11:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.013 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.013 11:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:35.013 11:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.013 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.013 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.270 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.270 11:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:35.270 11:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.270 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.270 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.527 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.528 11:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:35.528 11:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.528 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.528 11:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.094 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:36.094 11:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:36.094 11:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.094 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:36.094 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.351 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:36.351 11:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:36.351 11:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.351 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:36.351 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.608 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:36.609 11:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:36.609 11:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.609 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:36.609 11:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.866 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:36.866 11:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:36.866 11:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.866 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:36.866 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.124 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.124 11:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:37.124 11:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.124 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:37.124 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.688 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.688 11:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:37.688 11:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.688 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:37.688 11:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.688 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2150716 00:09:37.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2150716) - No such process 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2150716 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.946 
11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.946 rmmod nvme_tcp 00:09:37.946 rmmod nvme_fabrics 00:09:37.946 rmmod nvme_keyring 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2150642 ']' 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2150642 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 2150642 ']' 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 2150642 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2150642 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2150642' 00:09:37.946 killing process with pid 2150642 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 2150642 00:09:37.946 [2024-05-15 11:01:35.139205] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:37.946 11:01:35 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@971 -- # wait 2150642 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.245 11:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.206 11:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.206 00:09:40.206 real 0m19.255s 00:09:40.206 user 0m41.938s 00:09:40.206 sys 0m7.897s 00:09:40.206 11:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:40.206 11:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.206 ************************************ 00:09:40.206 END TEST nvmf_connect_stress 00:09:40.206 ************************************ 00:09:40.206 11:01:37 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:40.206 11:01:37 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:40.206 11:01:37 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:40.206 11:01:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.465 ************************************ 00:09:40.465 START TEST nvmf_fused_ordering 00:09:40.465 
************************************ 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:40.465 * Looking for test storage... 00:09:40.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.465 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.466 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.466 11:01:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.466 11:01:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.466 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.466 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.466 11:01:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.466 11:01:37 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.734 11:01:42 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:45.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:45.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:45.734 Found net devices under 0000:86:00.0: cvl_0_0 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:45.734 Found net devices under 0000:86:00.1: cvl_0_1 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.734 11:01:42 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.734 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:45.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:09:45.735 00:09:45.735 --- 10.0.0.2 ping statistics --- 00:09:45.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.735 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:45.735 00:09:45.735 --- 10.0.0.1 ping statistics --- 00:09:45.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.735 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.735 11:01:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:45.993 11:01:43 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2156078 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2156078 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 2156078 ']' 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:45.993 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:45.993 [2024-05-15 11:01:43.055520] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:09:45.993 [2024-05-15 11:01:43.055560] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.993 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.993 [2024-05-15 11:01:43.110597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.993 [2024-05-15 11:01:43.188825] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.993 [2024-05-15 11:01:43.188859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.993 [2024-05-15 11:01:43.188866] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.993 [2024-05-15 11:01:43.188872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.993 [2024-05-15 11:01:43.188878] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:45.994 [2024-05-15 11:01:43.188902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 [2024-05-15 11:01:43.895511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.927 11:01:43 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 [2024-05-15 11:01:43.911505] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:46.927 [2024-05-15 11:01:43.911657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 NULL1 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.927 11:01:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.927 11:01:43 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:46.927 [2024-05-15 11:01:43.964194] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:09:46.927 [2024-05-15 11:01:43.964236] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156110 ]
00:09:46.927 EAL: No free 2048 kB hugepages reported on node 1
00:09:47.186 Attached to nqn.2016-06.io.spdk:cnode1
00:09:47.186 Namespace ID: 1 size: 1GB
00:09:47.186 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1024 consecutive fused_ordering iterations logged between 00:09:47.186 and 00:09:48.841]
00:09:48.841 fused_ordering(1023)
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:48.841 rmmod nvme_tcp
00:09:48.841 rmmod nvme_fabrics
00:09:48.841 rmmod nvme_keyring
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2156078 ']'
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2156078
00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering --
common/autotest_common.sh@947 -- # '[' -z 2156078 ']' 00:09:48.841 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 2156078 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2156078 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2156078' 00:09:48.842 killing process with pid 2156078 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 2156078 00:09:48.842 [2024-05-15 11:01:45.957917] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:48.842 11:01:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 2156078 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.102 11:01:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.009 11:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.009 00:09:51.009 real 0m10.733s 00:09:51.009 user 0m5.613s 00:09:51.009 sys 0m5.471s 00:09:51.009 11:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:51.009 11:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.009 ************************************ 00:09:51.009 END TEST nvmf_fused_ordering 00:09:51.009 ************************************ 00:09:51.009 11:01:48 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:51.009 11:01:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:51.009 11:01:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:51.009 11:01:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.268 ************************************ 00:09:51.268 START TEST nvmf_delete_subsystem 00:09:51.268 ************************************ 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:51.268 * Looking for test storage... 
00:09:51.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.268 11:01:48 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.268 11:01:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.587 11:01:53 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:56.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:56.587 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:56.587 Found net devices under 0000:86:00.0: cvl_0_0 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:56.587 Found net devices under 0000:86:00.1: cvl_0_1 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.587 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.588 
11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:09:56.588 00:09:56.588 --- 10.0.0.2 ping statistics --- 00:09:56.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.588 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:09:56.588 00:09:56.588 --- 10.0.0.1 ping statistics --- 00:09:56.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.588 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:56.588 
11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2159857 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2159857 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 2159857 ']' 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:56.588 11:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.588 [2024-05-15 11:01:53.693377] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:09:56.588 [2024-05-15 11:01:53.693424] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.588 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.588 [2024-05-15 11:01:53.750202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.588 [2024-05-15 11:01:53.829592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:56.588 [2024-05-15 11:01:53.829628] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.588 [2024-05-15 11:01:53.829635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.588 [2024-05-15 11:01:53.829641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.588 [2024-05-15 11:01:53.829647] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.588 [2024-05-15 11:01:53.829683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.588 [2024-05-15 11:01:53.829687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.521 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:57.521 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:09:57.521 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.521 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:57.521 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.521 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 [2024-05-15 11:01:54.543586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 [2024-05-15 11:01:54.559572] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:57.522 [2024-05-15 11:01:54.559733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 NULL1 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:57.522 11:01:54 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 Delay0 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2160104 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:57.522 11:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:57.522 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.522 [2024-05-15 11:01:54.624210] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:59.420 11:01:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.420 11:01:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.420 11:01:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.420 Read completed with error (sct=0, sc=8) 00:09:59.420 Write completed with error (sct=0, sc=8) 00:09:59.420 starting I/O failed: -6 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted]
00:09:59.420 [2024-05-15 11:01:56.664213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe666a0 is same with the state(5) to be set
00:09:59.421 [2024-05-15 11:01:56.665105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd86c000c00 is same with the state(5) to be set
00:10:00.798 [2024-05-15 11:01:57.637228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe66060 is same with the state(5) to be set
00:10:00.798 [2024-05-15 11:01:57.666838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f10 is same with the state(5) to be set
00:10:00.798 [2024-05-15 11:01:57.668002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe670c0 is same with the state(5) to be set
00:10:00.798 [2024-05-15 11:01:57.668178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd86c00c2f0 is same with the state(5) to be set
00:10:00.798 [2024-05-15 11:01:57.668323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ec20 is same with the state(5) to be set
00:10:00.798 Initializing NVMe Controllers 00:10:00.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:00.798 Controller IO queue size 128, less than required. 00:10:00.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:00.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:00.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:00.798 Initialization complete. Launching workers.
00:10:00.798 ======================================================== 00:10:00.798 Latency(us) 00:10:00.798 Device Information : IOPS MiB/s Average min max 00:10:00.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.72 0.09 951680.11 842.15 1012694.77 00:10:00.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.44 0.08 880548.53 226.90 1012898.64 00:10:00.798 ======================================================== 00:10:00.798 Total : 341.16 0.17 919479.35 226.90 1012898.64 00:10:00.798 00:10:00.798 [2024-05-15 11:01:57.668970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe66060 (9): Bad file descriptor 00:10:00.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:00.798 11:01:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:00.798 11:01:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:00.798 11:01:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2160104 00:10:00.798 11:01:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2160104 00:10:01.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2160104) - No such process 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2160104 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2160104 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@637 -- # local arg=wait 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2160104 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 [2024-05-15 11:01:58.195536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2160785 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:01.057 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:01.057 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.057 [2024-05-15 11:01:58.255645] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:01.623 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:01.623 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:01.623 11:01:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.189 11:01:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:02.189 11:01:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:02.189 11:01:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.755 11:01:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:02.755 11:01:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:02.755 11:01:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.013 11:02:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.013 11:02:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:03.013 11:02:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.581 11:02:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.581 11:02:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:03.581 11:02:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.148 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.148 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:04.148 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.148 Initializing NVMe Controllers 00:10:04.148 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.149 Controller IO queue size 128, less than required. 00:10:04.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:04.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:04.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:04.149 Initialization complete. Launching workers. 00:10:04.149 ======================================================== 00:10:04.149 Latency(us) 00:10:04.149 Device Information : IOPS MiB/s Average min max 00:10:04.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003307.20 1000157.96 1010600.38 00:10:04.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004894.05 1000215.33 1012249.90 00:10:04.149 ======================================================== 00:10:04.149 Total : 256.00 0.12 1004100.63 1000157.96 1012249.90 00:10:04.149 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2160785 00:10:04.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2160785) - No such process 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2160785 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.750 rmmod nvme_tcp 00:10:04.750 rmmod nvme_fabrics 00:10:04.750 rmmod nvme_keyring 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2159857 ']' 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2159857 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 2159857 ']' 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 2159857 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2159857 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2159857' 00:10:04.750 killing process with pid 2159857 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 2159857 00:10:04.750 [2024-05-15 
11:02:01.856279] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:04.750 11:02:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 2159857 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.011 11:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.917 11:02:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.917 00:10:06.917 real 0m15.825s 00:10:06.917 user 0m29.925s 00:10:06.917 sys 0m4.737s 00:10:06.917 11:02:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:06.917 11:02:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.917 ************************************ 00:10:06.917 END TEST nvmf_delete_subsystem 00:10:06.917 ************************************ 00:10:06.917 11:02:04 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:06.917 11:02:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:06.917 11:02:04 nvmf_tcp -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:10:06.917 11:02:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.176 ************************************ 00:10:07.176 START TEST nvmf_ns_masking 00:10:07.176 ************************************ 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:07.176 * Looking for test storage... 00:10:07.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.176 11:02:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=616db14c-a71c-4313-ad2d-2a35c3724059 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.177 11:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 
00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:12.451 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.452 11:02:09 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:12.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:12.452 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.452 
11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:12.452 Found net devices under 0000:86:00.0: cvl_0_0 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:12.452 Found net devices under 0000:86:00.1: cvl_0_1 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.452 
11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:12.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:10:12.452 00:10:12.452 --- 10.0.0.2 ping statistics --- 00:10:12.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.452 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:10:12.452 00:10:12.452 --- 10.0.0.1 ping statistics --- 00:10:12.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.452 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2164783 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2164783 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 2164783 ']' 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:12.452 11:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:12.452 [2024-05-15 11:02:09.487552] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:10:12.452 [2024-05-15 11:02:09.487597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.452 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.452 [2024-05-15 11:02:09.543715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.452 [2024-05-15 11:02:09.624139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.452 [2024-05-15 11:02:09.624177] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.452 [2024-05-15 11:02:09.624184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.452 [2024-05-15 11:02:09.624190] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.452 [2024-05-15 11:02:09.624195] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.452 [2024-05-15 11:02:09.624238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.452 [2024-05-15 11:02:09.624335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.452 [2024-05-15 11:02:09.624353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.452 [2024-05-15 11:02:09.624354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.388 [2024-05-15 11:02:10.502673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:13.388 11:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:13.647 Malloc1 00:10:13.647 11:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:13.647 Malloc2 00:10:13.906 11:02:10 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.906 11:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:14.165 11:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.165 [2024-05-15 11:02:11.418778] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:14.165 [2024-05-15 11:02:11.419003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 616db14c-a71c-4313-ad2d-2a35c3724059 -a 10.0.0.2 -s 4420 -i 4 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:10:14.424 11:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:16.958 [ 0]:0x1 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9703d75f689041898d2e9020788f7b49 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9703d75f689041898d2e9020788f7b49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:16.958 
11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:16.958 [ 0]:0x1 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:16.958 11:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9703d75f689041898d2e9020788f7b49 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9703d75f689041898d2e9020788f7b49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:16.958 [ 1]:0x2 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.958 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.217 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:17.217 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:10:17.217 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 616db14c-a71c-4313-ad2d-2a35c3724059 -a 10.0.0.2 -s 4420 -i 4 00:10:17.476 11:02:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:17.476 11:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:10:17.476 11:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.476 11:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:10:17.476 11:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:10:17.476 11:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:19.381 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.641 11:02:16 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:19.641 [ 0]:0x2 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.641 11:02:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:19.901 [ 0]:0x1 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=9703d75f689041898d2e9020788f7b49 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9703d75f689041898d2e9020788f7b49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:19.901 [ 1]:0x2 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.901 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:20.160 [ 0]:0x2 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:20.160 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:10:20.418 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 616db14c-a71c-4313-ad2d-2a35c3724059 -a 10.0.0.2 -s 4420 -i 4 00:10:20.677 11:02:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:20.677 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:10:20.677 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.677 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:10:20.677 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:10:20.677 11:02:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:23.212 [ 0]:0x1 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9703d75f689041898d2e9020788f7b49 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9703d75f689041898d2e9020788f7b49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:23.212 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:23.212 [ 1]:0x2 00:10:23.213 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:23.213 11:02:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:23.213 11:02:20 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:23.213 [ 0]:0x2 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:23.213 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:23.472 [2024-05-15 11:02:20.496500] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:23.472 request: 00:10:23.472 { 00:10:23.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.472 "nsid": 2, 00:10:23.472 "host": "nqn.2016-06.io.spdk:host1", 00:10:23.472 "method": "nvmf_ns_remove_host", 00:10:23.472 "req_id": 1 00:10:23.472 } 00:10:23.472 Got JSON-RPC error response 00:10:23.472 response: 00:10:23.472 { 00:10:23.472 "code": -32602, 00:10:23.472 "message": "Invalid parameters" 00:10:23.472 } 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:23.472 11:02:20 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:23.472 [ 0]:0x2 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:23.472 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:23.473 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa46026da9ec4fd997fb7a5811b45a0b 00:10:23.473 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa46026da9ec4fd997fb7a5811b45a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.473 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:10:23.473 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.473 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 
00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.732 rmmod nvme_tcp 00:10:23.732 rmmod nvme_fabrics 00:10:23.732 rmmod nvme_keyring 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2164783 ']' 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2164783 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 2164783 ']' 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 2164783 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2164783 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2164783' 00:10:23.732 killing process with pid 2164783 00:10:23.732 11:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 2164783 00:10:23.732 [2024-05-15 11:02:20.973800] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:23.732 11:02:20 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 2164783 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.992 11:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.532 11:02:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.532 00:10:26.532 real 0m19.075s 00:10:26.532 user 0m50.265s 00:10:26.532 sys 0m5.371s 00:10:26.532 11:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:26.532 11:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:26.532 ************************************ 00:10:26.532 END TEST nvmf_ns_masking 00:10:26.532 ************************************ 00:10:26.532 11:02:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:26.532 11:02:23 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:26.532 11:02:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:26.532 11:02:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:26.532 11:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.532 ************************************ 00:10:26.532 START TEST nvmf_nvme_cli 
00:10:26.532 ************************************ 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:26.532 * Looking for test storage... 00:10:26.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.532 11:02:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.532 11:02:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:31.874 11:02:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.874 11:02:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.874 11:02:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.874 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.875 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.875 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:31.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:10:31.875 00:10:31.875 --- 10.0.0.2 ping statistics --- 00:10:31.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.875 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:10:31.875 00:10:31.875 --- 10.0.0.1 ping statistics --- 00:10:31.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.875 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2170305 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2170305 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 2170305 ']' 
00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:31.875 11:02:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:31.875 [2024-05-15 11:02:28.872374] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:10:31.875 [2024-05-15 11:02:28.872413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.875 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.875 [2024-05-15 11:02:28.933291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.875 [2024-05-15 11:02:29.010796] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.875 [2024-05-15 11:02:29.010832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.875 [2024-05-15 11:02:29.010839] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.875 [2024-05-15 11:02:29.010845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.875 [2024-05-15 11:02:29.010850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:31.875 [2024-05-15 11:02:29.010901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.875 [2024-05-15 11:02:29.010991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.875 [2024-05-15 11:02:29.011014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.875 [2024-05-15 11:02:29.011019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.443 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:32.443 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:10:32.443 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.443 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:32.443 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 [2024-05-15 11:02:29.718162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 Malloc0 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 
11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 Malloc1 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 
00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 [2024-05-15 11:02:29.799827] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:32.701 [2024-05-15 11:02:29.800076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:32.701 00:10:32.701 Discovery Log Number of Records 2, Generation counter 2 00:10:32.701 =====Discovery Log Entry 0====== 00:10:32.701 trtype: tcp 00:10:32.701 adrfam: ipv4 00:10:32.701 subtype: current discovery subsystem 00:10:32.701 treq: not required 00:10:32.701 portid: 0 00:10:32.701 trsvcid: 4420 00:10:32.701 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:32.701 traddr: 10.0.0.2 00:10:32.701 eflags: explicit discovery connections, duplicate discovery information 00:10:32.701 sectype: none 00:10:32.701 =====Discovery Log Entry 1====== 00:10:32.701 trtype: tcp 00:10:32.701 adrfam: ipv4 00:10:32.701 subtype: nvme subsystem 00:10:32.701 treq: not required 00:10:32.701 portid: 0 00:10:32.701 trsvcid: 4420 00:10:32.701 subnqn: nqn.2016-06.io.spdk:cnode1 
00:10:32.701 traddr: 10.0.0.2 00:10:32.701 eflags: none 00:10:32.701 sectype: none 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:32.701 11:02:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.076 11:02:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:34.076 11:02:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:10:34.076 11:02:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.076 11:02:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:10:34.076 11:02:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:10:34.076 11:02:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 
15 )) 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:35.978 /dev/nvme0n1 ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli 
-- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:35.978 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.236 rmmod nvme_tcp 00:10:36.236 rmmod nvme_fabrics 00:10:36.236 rmmod nvme_keyring 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:36.236 11:02:33 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2170305 ']' 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2170305 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 2170305 ']' 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 2170305 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:36.236 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2170305 00:10:36.237 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:36.237 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:36.237 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2170305' 00:10:36.237 killing process with pid 2170305 00:10:36.237 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 2170305 00:10:36.237 [2024-05-15 11:02:33.404723] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:36.237 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 2170305 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.495 11:02:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.031 11:02:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.031 00:10:39.031 real 0m12.365s 00:10:39.031 user 0m19.926s 00:10:39.031 sys 0m4.606s 00:10:39.031 11:02:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:39.031 11:02:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.031 ************************************ 00:10:39.031 END TEST nvmf_nvme_cli 00:10:39.031 ************************************ 00:10:39.031 11:02:35 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:39.031 11:02:35 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:39.031 11:02:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:39.031 11:02:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:39.031 11:02:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.031 ************************************ 00:10:39.031 START TEST nvmf_vfio_user 00:10:39.031 ************************************ 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:39.031 * Looking for test storage... 
00:10:39.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.031 
11:02:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2171593 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2171593' 00:10:39.031 Process pid: 2171593 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2171593 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2171593 ']' 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:39.031 
11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:39.031 11:02:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:39.031 [2024-05-15 11:02:35.961113] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:10:39.031 [2024-05-15 11:02:35.961156] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.031 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.031 [2024-05-15 11:02:36.015978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.031 [2024-05-15 11:02:36.095630] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.031 [2024-05-15 11:02:36.095665] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.031 [2024-05-15 11:02:36.095671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.031 [2024-05-15 11:02:36.095678] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.031 [2024-05-15 11:02:36.095683] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.031 [2024-05-15 11:02:36.095725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.031 [2024-05-15 11:02:36.095741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.031 [2024-05-15 11:02:36.095831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.031 [2024-05-15 11:02:36.095832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.599 11:02:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:39.599 11:02:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:10:39.599 11:02:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:40.535 11:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:40.793 11:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:40.793 11:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:40.793 11:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.793 11:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:40.793 11:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:41.052 Malloc1 00:10:41.052 11:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:41.311 11:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:41.311 11:02:38 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:41.569 [2024-05-15 11:02:38.689718] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:41.569 11:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:41.569 11:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:41.569 11:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:41.827 Malloc2 00:10:41.827 11:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:42.086 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:42.086 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:42.346 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:42.346 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:42.346 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:42.346 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 
00:10:42.346 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:42.347 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:42.347 [2024-05-15 11:02:39.488342] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:10:42.347 [2024-05-15 11:02:39.488392] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172185 ] 00:10:42.347 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.347 [2024-05-15 11:02:39.518715] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:42.347 [2024-05-15 11:02:39.528516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.347 [2024-05-15 11:02:39.528537] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa93ac30000 00:10:42.347 [2024-05-15 11:02:39.529512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.530511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.531524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.532525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.533533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.534536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.535540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.536548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.347 [2024-05-15 11:02:39.537550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.347 [2024-05-15 11:02:39.537562] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa93ac25000 00:10:42.347 [2024-05-15 11:02:39.538505] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.347 [2024-05-15 11:02:39.547097] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:42.347 [2024-05-15 11:02:39.547123] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:42.347 [2024-05-15 11:02:39.555685] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:42.347 [2024-05-15 11:02:39.555727] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:42.347 [2024-05-15 11:02:39.555806] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:42.347 [2024-05-15 11:02:39.555824] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:42.347 [2024-05-15 11:02:39.555829] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:42.347 [2024-05-15 11:02:39.556683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:42.347 [2024-05-15 11:02:39.556692] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:42.347 [2024-05-15 11:02:39.556699] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:42.347 [2024-05-15 11:02:39.557689] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:42.347 [2024-05-15 11:02:39.557699] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:42.347 [2024-05-15 11:02:39.557706] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:42.347 [2024-05-15 11:02:39.558693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:42.347 [2024-05-15 11:02:39.558701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:42.347 [2024-05-15 11:02:39.559700] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:42.347 [2024-05-15 11:02:39.559708] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:42.347 [2024-05-15 11:02:39.559712] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:42.347 [2024-05-15 11:02:39.559718] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:42.347 [2024-05-15 11:02:39.559823] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:42.347 [2024-05-15 11:02:39.559827] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:42.347 [2024-05-15 11:02:39.559832] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:42.347 [2024-05-15 11:02:39.560701] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:42.347 [2024-05-15 11:02:39.561704] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:42.347 [2024-05-15 11:02:39.562711] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:42.347 [2024-05-15 11:02:39.563709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:42.347 [2024-05-15 11:02:39.563770] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:42.347 [2024-05-15 11:02:39.564719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:42.347 [2024-05-15 11:02:39.564726] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:42.347 [2024-05-15 11:02:39.564731] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564747] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:42.347 [2024-05-15 11:02:39.564755] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564770] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.347 [2024-05-15 11:02:39.564775] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.347 [2024-05-15 11:02:39.564787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.347 [2024-05-15 11:02:39.564825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:42.347 [2024-05-15 11:02:39.564835] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:42.347 [2024-05-15 11:02:39.564840] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 
00:10:42.347 [2024-05-15 11:02:39.564844] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:42.347 [2024-05-15 11:02:39.564848] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:42.347 [2024-05-15 11:02:39.564852] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:42.347 [2024-05-15 11:02:39.564861] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:42.347 [2024-05-15 11:02:39.564865] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564874] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:42.347 [2024-05-15 11:02:39.564900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:42.347 [2024-05-15 11:02:39.564912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.347 [2024-05-15 11:02:39.564920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.347 [2024-05-15 11:02:39.564927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.347 [2024-05-15 11:02:39.564934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.347 [2024-05-15 11:02:39.564938] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564944] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:42.347 [2024-05-15 11:02:39.564960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:42.347 [2024-05-15 11:02:39.564965] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:42.347 [2024-05-15 11:02:39.564971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564978] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.564990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.347 [2024-05-15 11:02:39.565003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:42.347 [2024-05-15 11:02:39.565044] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:42.347 [2024-05-15 11:02:39.565051] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565057] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:42.348 [2024-05-15 11:02:39.565061] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:42.348 [2024-05-15 11:02:39.565067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565090] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:42.348 [2024-05-15 11:02:39.565102] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565108] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565115] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.348 [2024-05-15 11:02:39.565118] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.348 [2024-05-15 11:02:39.565124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.348 [2024-05-15 
11:02:39.565141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565152] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565159] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565170] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.348 [2024-05-15 11:02:39.565174] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.348 [2024-05-15 11:02:39.565180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565202] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565208] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565225] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565229] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:42.348 [2024-05-15 11:02:39.565233] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:42.348 [2024-05-15 11:02:39.565238] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:42.348 [2024-05-15 11:02:39.565258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e 
sqhd:000f p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565328] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:42.348 [2024-05-15 11:02:39.565332] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:42.348 [2024-05-15 11:02:39.565335] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:42.348 [2024-05-15 11:02:39.565338] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:42.348 [2024-05-15 11:02:39.565344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:42.348 [2024-05-15 11:02:39.565350] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:42.348 [2024-05-15 11:02:39.565354] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:42.348 [2024-05-15 11:02:39.565359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565365] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:42.348 [2024-05-15 11:02:39.565368] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.348 [2024-05-15 11:02:39.565374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565382] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:42.348 [2024-05-15 11:02:39.565386] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 
00:10:42.348 [2024-05-15 11:02:39.565391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:42.348 [2024-05-15 11:02:39.565397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:42.348 [2024-05-15 11:02:39.565425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:42.348 ===================================================== 00:10:42.348 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:42.348 ===================================================== 00:10:42.348 Controller Capabilities/Features 00:10:42.348 ================================ 00:10:42.348 Vendor ID: 4e58 00:10:42.348 Subsystem Vendor ID: 4e58 00:10:42.348 Serial Number: SPDK1 00:10:42.348 Model Number: SPDK bdev Controller 00:10:42.348 Firmware Version: 24.05 00:10:42.348 Recommended Arb Burst: 6 00:10:42.348 IEEE OUI Identifier: 8d 6b 50 00:10:42.348 Multi-path I/O 00:10:42.348 May have multiple subsystem ports: Yes 00:10:42.348 May have multiple controllers: Yes 00:10:42.348 Associated with SR-IOV VF: No 00:10:42.348 Max Data Transfer Size: 131072 00:10:42.348 Max Number of Namespaces: 32 00:10:42.348 Max Number of I/O Queues: 127 00:10:42.348 NVMe Specification Version (VS): 1.3 00:10:42.348 NVMe Specification Version (Identify): 1.3 00:10:42.348 Maximum Queue Entries: 256 00:10:42.348 Contiguous Queues Required: Yes 00:10:42.348 Arbitration Mechanisms Supported 00:10:42.348 
Weighted Round Robin: Not Supported 00:10:42.348 Vendor Specific: Not Supported 00:10:42.348 Reset Timeout: 15000 ms 00:10:42.348 Doorbell Stride: 4 bytes 00:10:42.348 NVM Subsystem Reset: Not Supported 00:10:42.348 Command Sets Supported 00:10:42.348 NVM Command Set: Supported 00:10:42.348 Boot Partition: Not Supported 00:10:42.348 Memory Page Size Minimum: 4096 bytes 00:10:42.348 Memory Page Size Maximum: 4096 bytes 00:10:42.348 Persistent Memory Region: Not Supported 00:10:42.348 Optional Asynchronous Events Supported 00:10:42.348 Namespace Attribute Notices: Supported 00:10:42.348 Firmware Activation Notices: Not Supported 00:10:42.348 ANA Change Notices: Not Supported 00:10:42.348 PLE Aggregate Log Change Notices: Not Supported 00:10:42.349 LBA Status Info Alert Notices: Not Supported 00:10:42.349 EGE Aggregate Log Change Notices: Not Supported 00:10:42.349 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.349 Zone Descriptor Change Notices: Not Supported 00:10:42.349 Discovery Log Change Notices: Not Supported 00:10:42.349 Controller Attributes 00:10:42.349 128-bit Host Identifier: Supported 00:10:42.349 Non-Operational Permissive Mode: Not Supported 00:10:42.349 NVM Sets: Not Supported 00:10:42.349 Read Recovery Levels: Not Supported 00:10:42.349 Endurance Groups: Not Supported 00:10:42.349 Predictable Latency Mode: Not Supported 00:10:42.349 Traffic Based Keep ALive: Not Supported 00:10:42.349 Namespace Granularity: Not Supported 00:10:42.349 SQ Associations: Not Supported 00:10:42.349 UUID List: Not Supported 00:10:42.349 Multi-Domain Subsystem: Not Supported 00:10:42.349 Fixed Capacity Management: Not Supported 00:10:42.349 Variable Capacity Management: Not Supported 00:10:42.349 Delete Endurance Group: Not Supported 00:10:42.349 Delete NVM Set: Not Supported 00:10:42.349 Extended LBA Formats Supported: Not Supported 00:10:42.349 Flexible Data Placement Supported: Not Supported 00:10:42.349 00:10:42.349 Controller Memory Buffer Support 
00:10:42.349 ================================ 00:10:42.349 Supported: No 00:10:42.349 00:10:42.349 Persistent Memory Region Support 00:10:42.349 ================================ 00:10:42.349 Supported: No 00:10:42.349 00:10:42.349 Admin Command Set Attributes 00:10:42.349 ============================ 00:10:42.349 Security Send/Receive: Not Supported 00:10:42.349 Format NVM: Not Supported 00:10:42.349 Firmware Activate/Download: Not Supported 00:10:42.349 Namespace Management: Not Supported 00:10:42.349 Device Self-Test: Not Supported 00:10:42.349 Directives: Not Supported 00:10:42.349 NVMe-MI: Not Supported 00:10:42.349 Virtualization Management: Not Supported 00:10:42.349 Doorbell Buffer Config: Not Supported 00:10:42.349 Get LBA Status Capability: Not Supported 00:10:42.349 Command & Feature Lockdown Capability: Not Supported 00:10:42.349 Abort Command Limit: 4 00:10:42.349 Async Event Request Limit: 4 00:10:42.349 Number of Firmware Slots: N/A 00:10:42.349 Firmware Slot 1 Read-Only: N/A 00:10:42.349 Firmware Activation Without Reset: N/A 00:10:42.349 Multiple Update Detection Support: N/A 00:10:42.349 Firmware Update Granularity: No Information Provided 00:10:42.349 Per-Namespace SMART Log: No 00:10:42.349 Asymmetric Namespace Access Log Page: Not Supported 00:10:42.349 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:42.349 Command Effects Log Page: Supported 00:10:42.349 Get Log Page Extended Data: Supported 00:10:42.349 Telemetry Log Pages: Not Supported 00:10:42.349 Persistent Event Log Pages: Not Supported 00:10:42.349 Supported Log Pages Log Page: May Support 00:10:42.349 Commands Supported & Effects Log Page: Not Supported 00:10:42.349 Feature Identifiers & Effects Log Page:May Support 00:10:42.349 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.349 Data Area 4 for Telemetry Log: Not Supported 00:10:42.349 Error Log Page Entries Supported: 128 00:10:42.349 Keep Alive: Supported 00:10:42.349 Keep Alive Granularity: 10000 ms 00:10:42.349 
00:10:42.349 NVM Command Set Attributes 00:10:42.349 ========================== 00:10:42.349 Submission Queue Entry Size 00:10:42.349 Max: 64 00:10:42.349 Min: 64 00:10:42.349 Completion Queue Entry Size 00:10:42.349 Max: 16 00:10:42.349 Min: 16 00:10:42.349 Number of Namespaces: 32 00:10:42.349 Compare Command: Supported 00:10:42.349 Write Uncorrectable Command: Not Supported 00:10:42.349 Dataset Management Command: Supported 00:10:42.349 Write Zeroes Command: Supported 00:10:42.349 Set Features Save Field: Not Supported 00:10:42.349 Reservations: Not Supported 00:10:42.349 Timestamp: Not Supported 00:10:42.349 Copy: Supported 00:10:42.349 Volatile Write Cache: Present 00:10:42.349 Atomic Write Unit (Normal): 1 00:10:42.349 Atomic Write Unit (PFail): 1 00:10:42.349 Atomic Compare & Write Unit: 1 00:10:42.349 Fused Compare & Write: Supported 00:10:42.349 Scatter-Gather List 00:10:42.349 SGL Command Set: Supported (Dword aligned) 00:10:42.349 SGL Keyed: Not Supported 00:10:42.349 SGL Bit Bucket Descriptor: Not Supported 00:10:42.349 SGL Metadata Pointer: Not Supported 00:10:42.349 Oversized SGL: Not Supported 00:10:42.349 SGL Metadata Address: Not Supported 00:10:42.349 SGL Offset: Not Supported 00:10:42.349 Transport SGL Data Block: Not Supported 00:10:42.349 Replay Protected Memory Block: Not Supported 00:10:42.349 00:10:42.349 Firmware Slot Information 00:10:42.349 ========================= 00:10:42.349 Active slot: 1 00:10:42.349 Slot 1 Firmware Revision: 24.05 00:10:42.349 00:10:42.349 00:10:42.349 Commands Supported and Effects 00:10:42.349 ============================== 00:10:42.349 Admin Commands 00:10:42.349 -------------- 00:10:42.349 Get Log Page (02h): Supported 00:10:42.349 Identify (06h): Supported 00:10:42.349 Abort (08h): Supported 00:10:42.349 Set Features (09h): Supported 00:10:42.349 Get Features (0Ah): Supported 00:10:42.349 Asynchronous Event Request (0Ch): Supported 00:10:42.349 Keep Alive (18h): Supported 00:10:42.349 I/O Commands 00:10:42.349 
------------ 00:10:42.349 Flush (00h): Supported LBA-Change 00:10:42.349 Write (01h): Supported LBA-Change 00:10:42.349 Read (02h): Supported 00:10:42.349 Compare (05h): Supported 00:10:42.349 Write Zeroes (08h): Supported LBA-Change 00:10:42.349 Dataset Management (09h): Supported LBA-Change 00:10:42.349 Copy (19h): Supported LBA-Change 00:10:42.349 Unknown (79h): Supported LBA-Change 00:10:42.349 Unknown (7Ah): Supported 00:10:42.349 00:10:42.349 Error Log 00:10:42.349 ========= 00:10:42.349 00:10:42.349 Arbitration 00:10:42.349 =========== 00:10:42.349 Arbitration Burst: 1 00:10:42.349 00:10:42.349 Power Management 00:10:42.349 ================ 00:10:42.349 Number of Power States: 1 00:10:42.349 Current Power State: Power State #0 00:10:42.349 Power State #0: 00:10:42.349 Max Power: 0.00 W 00:10:42.349 Non-Operational State: Operational 00:10:42.349 Entry Latency: Not Reported 00:10:42.349 Exit Latency: Not Reported 00:10:42.349 Relative Read Throughput: 0 00:10:42.349 Relative Read Latency: 0 00:10:42.349 Relative Write Throughput: 0 00:10:42.349 Relative Write Latency: 0 00:10:42.349 Idle Power: Not Reported 00:10:42.349 Active Power: Not Reported 00:10:42.349 Non-Operational Permissive Mode: Not Supported 00:10:42.349 00:10:42.349 Health Information 00:10:42.349 ================== 00:10:42.349 Critical Warnings: 00:10:42.349 Available Spare Space: OK 00:10:42.349 Temperature: OK 00:10:42.349 Device Reliability: OK 00:10:42.349 Read Only: No 00:10:42.349 Volatile Memory Backup: OK 00:10:42.349 Current Temperature: 0 Kelvin (-2[2024-05-15 11:02:39.565510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:42.349 [2024-05-15 11:02:39.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:42.349 [2024-05-15 11:02:39.565539] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:42.349 [2024-05-15 11:02:39.565547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.349 [2024-05-15 11:02:39.565552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.350 [2024-05-15 11:02:39.565557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.350 [2024-05-15 11:02:39.565564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.350 [2024-05-15 11:02:39.565732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:42.350 [2024-05-15 11:02:39.565740] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:42.350 [2024-05-15 11:02:39.566733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:42.350 [2024-05-15 11:02:39.566780] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:42.350 [2024-05-15 11:02:39.566786] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:42.350 [2024-05-15 11:02:39.567740] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:42.350 [2024-05-15 11:02:39.567750] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:42.350 [2024-05-15 11:02:39.567800] vfio_user_pci.c: 
399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:42.350 [2024-05-15 11:02:39.569776] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.350 73 Celsius) 00:10:42.350 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:42.350 Available Spare: 0% 00:10:42.350 Available Spare Threshold: 0% 00:10:42.350 Life Percentage Used: 0% 00:10:42.350 Data Units Read: 0 00:10:42.350 Data Units Written: 0 00:10:42.350 Host Read Commands: 0 00:10:42.350 Host Write Commands: 0 00:10:42.350 Controller Busy Time: 0 minutes 00:10:42.350 Power Cycles: 0 00:10:42.350 Power On Hours: 0 hours 00:10:42.350 Unsafe Shutdowns: 0 00:10:42.350 Unrecoverable Media Errors: 0 00:10:42.350 Lifetime Error Log Entries: 0 00:10:42.350 Warning Temperature Time: 0 minutes 00:10:42.350 Critical Temperature Time: 0 minutes 00:10:42.350 00:10:42.350 Number of Queues 00:10:42.350 ================ 00:10:42.350 Number of I/O Submission Queues: 127 00:10:42.350 Number of I/O Completion Queues: 127 00:10:42.350 00:10:42.350 Active Namespaces 00:10:42.350 ================= 00:10:42.350 Namespace ID:1 00:10:42.350 Error Recovery Timeout: Unlimited 00:10:42.350 Command Set Identifier: NVM (00h) 00:10:42.350 Deallocate: Supported 00:10:42.350 Deallocated/Unwritten Error: Not Supported 00:10:42.350 Deallocated Read Value: Unknown 00:10:42.350 Deallocate in Write Zeroes: Not Supported 00:10:42.350 Deallocated Guard Field: 0xFFFF 00:10:42.350 Flush: Supported 00:10:42.350 Reservation: Supported 00:10:42.350 Namespace Sharing Capabilities: Multiple Controllers 00:10:42.350 Size (in LBAs): 131072 (0GiB) 00:10:42.350 Capacity (in LBAs): 131072 (0GiB) 00:10:42.350 Utilization (in LBAs): 131072 (0GiB) 00:10:42.350 NGUID: 964CDAD04E5244BF96AD1AD847F68CD5 00:10:42.350 UUID: 964cdad0-4e52-44bf-96ad-1ad847f68cd5 00:10:42.350 Thin Provisioning: Not Supported 00:10:42.350 Per-NS Atomic 
Units: Yes 00:10:42.350 Atomic Boundary Size (Normal): 0 00:10:42.350 Atomic Boundary Size (PFail): 0 00:10:42.350 Atomic Boundary Offset: 0 00:10:42.350 Maximum Single Source Range Length: 65535 00:10:42.350 Maximum Copy Length: 65535 00:10:42.350 Maximum Source Range Count: 1 00:10:42.350 NGUID/EUI64 Never Reused: No 00:10:42.350 Namespace Write Protected: No 00:10:42.350 Number of LBA Formats: 1 00:10:42.350 Current LBA Format: LBA Format #00 00:10:42.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.350 00:10:42.350 11:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:42.609 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.609 [2024-05-15 11:02:39.784970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:47.881 Initializing NVMe Controllers 00:10:47.881 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:47.881 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:47.881 Initialization complete. Launching workers. 
00:10:47.881 ======================================================== 00:10:47.881 Latency(us) 00:10:47.881 Device Information : IOPS MiB/s Average min max 00:10:47.881 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39942.86 156.03 3204.39 966.61 6661.38 00:10:47.881 ======================================================== 00:10:47.881 Total : 39942.86 156.03 3204.39 966.61 6661.38 00:10:47.881 00:10:47.881 [2024-05-15 11:02:44.804564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:47.881 11:02:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:47.881 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.881 [2024-05-15 11:02:45.029581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:53.151 Initializing NVMe Controllers 00:10:53.151 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:53.151 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:53.151 Initialization complete. Launching workers. 
00:10:53.151 ======================================================== 00:10:53.151 Latency(us) 00:10:53.151 Device Information : IOPS MiB/s Average min max 00:10:53.151 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.85 62.68 7982.45 5994.35 15472.96 00:10:53.151 ======================================================== 00:10:53.151 Total : 16045.85 62.68 7982.45 5994.35 15472.96 00:10:53.151 00:10:53.151 [2024-05-15 11:02:50.070596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:53.151 11:02:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:53.151 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.151 [2024-05-15 11:02:50.267574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:58.451 [2024-05-15 11:02:55.329429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:58.451 Initializing NVMe Controllers 00:10:58.451 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:58.451 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:58.451 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:58.451 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:58.451 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:58.451 Initialization complete. Launching workers. 
00:10:58.451 Starting thread on core 2 00:10:58.451 Starting thread on core 3 00:10:58.451 Starting thread on core 1 00:10:58.452 11:02:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:58.452 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.452 [2024-05-15 11:02:55.611573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:01.739 [2024-05-15 11:02:58.689084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:01.739 Initializing NVMe Controllers 00:11:01.739 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.739 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.739 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:01.739 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:01.739 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:01.739 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:01.739 Initialization complete. Launching workers. 
00:11:01.739 Starting thread on core 1 with urgent priority queue 00:11:01.739 Starting thread on core 2 with urgent priority queue 00:11:01.739 Starting thread on core 3 with urgent priority queue 00:11:01.739 Starting thread on core 0 with urgent priority queue 00:11:01.739 SPDK bdev Controller (SPDK1 ) core 0: 7625.67 IO/s 13.11 secs/100000 ios 00:11:01.739 SPDK bdev Controller (SPDK1 ) core 1: 7909.33 IO/s 12.64 secs/100000 ios 00:11:01.739 SPDK bdev Controller (SPDK1 ) core 2: 8292.67 IO/s 12.06 secs/100000 ios 00:11:01.739 SPDK bdev Controller (SPDK1 ) core 3: 8372.00 IO/s 11.94 secs/100000 ios 00:11:01.739 ======================================================== 00:11:01.739 00:11:01.739 11:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:01.739 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.739 [2024-05-15 11:02:58.962627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:01.739 Initializing NVMe Controllers 00:11:01.739 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.739 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.739 Namespace ID: 1 size: 0GB 00:11:01.739 Initialization complete. 00:11:01.739 INFO: using host memory buffer for IO 00:11:01.739 Hello world! 
00:11:01.739 [2024-05-15 11:02:58.996852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:01.998 11:02:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:01.998 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.256 [2024-05-15 11:02:59.268617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.193 Initializing NVMe Controllers 00:11:03.193 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.193 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.193 Initialization complete. Launching workers. 00:11:03.193 submit (in ns) avg, min, max = 5718.7, 3290.4, 3999024.3 00:11:03.193 complete (in ns) avg, min, max = 22841.7, 1825.2, 4994972.2 00:11:03.193 00:11:03.193 Submit histogram 00:11:03.193 ================ 00:11:03.193 Range in us Cumulative Count 00:11:03.193 3.283 - 3.297: 0.0246% ( 4) 00:11:03.193 3.297 - 3.311: 0.0677% ( 7) 00:11:03.193 3.311 - 3.325: 0.2092% ( 23) 00:11:03.193 3.325 - 3.339: 0.5475% ( 55) 00:11:03.193 3.339 - 3.353: 2.0117% ( 238) 00:11:03.193 3.353 - 3.367: 5.7336% ( 605) 00:11:03.193 3.367 - 3.381: 10.6367% ( 797) 00:11:03.193 3.381 - 3.395: 16.4811% ( 950) 00:11:03.193 3.395 - 3.409: 22.7069% ( 1012) 00:11:03.193 3.409 - 3.423: 28.7604% ( 984) 00:11:03.193 3.423 - 3.437: 34.2787% ( 897) 00:11:03.193 3.437 - 3.450: 39.7478% ( 889) 00:11:03.193 3.450 - 3.464: 44.8354% ( 827) 00:11:03.193 3.464 - 3.478: 49.1049% ( 694) 00:11:03.193 3.478 - 3.492: 53.0421% ( 640) 00:11:03.193 3.492 - 3.506: 59.3910% ( 1032) 00:11:03.193 3.506 - 3.520: 65.5860% ( 1007) 00:11:03.193 3.520 - 3.534: 70.2492% ( 758) 00:11:03.193 3.534 - 3.548: 75.2199% ( 808) 00:11:03.193 3.548 - 3.562: 80.0308% ( 782) 
00:11:03.193 3.562 - 3.590: 85.2599% ( 850) 00:11:03.193 3.590 - 3.617: 86.8287% ( 255) 00:11:03.193 3.617 - 3.645: 87.7207% ( 145) 00:11:03.193 3.645 - 3.673: 89.0311% ( 213) 00:11:03.193 3.673 - 3.701: 90.8151% ( 290) 00:11:03.193 3.701 - 3.729: 92.6853% ( 304) 00:11:03.193 3.729 - 3.757: 94.3156% ( 265) 00:11:03.193 3.757 - 3.784: 95.9766% ( 270) 00:11:03.193 3.784 - 3.812: 97.4592% ( 241) 00:11:03.193 3.812 - 3.840: 98.4559% ( 162) 00:11:03.193 3.840 - 3.868: 99.0218% ( 92) 00:11:03.193 3.868 - 3.896: 99.3602% ( 55) 00:11:03.193 3.896 - 3.923: 99.5078% ( 24) 00:11:03.193 3.923 - 3.951: 99.6124% ( 17) 00:11:03.193 3.951 - 3.979: 99.6247% ( 2) 00:11:03.193 3.979 - 4.007: 99.6678% ( 7) 00:11:03.193 4.007 - 4.035: 99.6801% ( 2) 00:11:03.193 4.035 - 4.063: 99.6863% ( 1) 00:11:03.193 4.118 - 4.146: 99.6924% ( 1) 00:11:03.193 5.037 - 5.064: 99.6986% ( 1) 00:11:03.193 5.176 - 5.203: 99.7109% ( 2) 00:11:03.193 5.203 - 5.231: 99.7170% ( 1) 00:11:03.193 5.287 - 5.315: 99.7232% ( 1) 00:11:03.193 5.315 - 5.343: 99.7293% ( 1) 00:11:03.193 5.370 - 5.398: 99.7355% ( 1) 00:11:03.193 5.426 - 5.454: 99.7416% ( 1) 00:11:03.193 5.482 - 5.510: 99.7478% ( 1) 00:11:03.193 5.510 - 5.537: 99.7539% ( 1) 00:11:03.193 5.565 - 5.593: 99.7601% ( 1) 00:11:03.193 5.593 - 5.621: 99.7724% ( 2) 00:11:03.193 5.621 - 5.649: 99.7785% ( 1) 00:11:03.193 5.649 - 5.677: 99.7847% ( 1) 00:11:03.193 5.760 - 5.788: 99.7970% ( 2) 00:11:03.193 5.899 - 5.927: 99.8031% ( 1) 00:11:03.193 6.066 - 6.094: 99.8093% ( 1) 00:11:03.193 6.122 - 6.150: 99.8277% ( 3) 00:11:03.193 6.205 - 6.233: 99.8339% ( 1) 00:11:03.193 6.261 - 6.289: 99.8462% ( 2) 00:11:03.193 6.344 - 6.372: 99.8524% ( 1) 00:11:03.193 6.372 - 6.400: 99.8585% ( 1) 00:11:03.193 6.428 - 6.456: 99.8647% ( 1) 00:11:03.193 6.511 - 6.539: 99.8708% ( 1) 00:11:03.193 6.650 - 6.678: 99.8770% ( 1) 00:11:03.193 6.734 - 6.762: 99.8831% ( 1) 00:11:03.193 7.012 - 7.040: 99.8893% ( 1) 00:11:03.193 7.290 - 7.346: 99.8954% ( 1) 00:11:03.193 7.346 - 7.402: 99.9016% ( 1) 
00:11:03.193 7.457 - 7.513: 99.9139% ( 2) 00:11:03.193 8.626 - 8.682: 99.9200% ( 1) 00:11:03.193 9.071 - 9.127: 99.9262% ( 1) 00:11:03.193 9.294 - 9.350: 99.9323% ( 1) 00:11:03.193 10.296 - 10.351: 99.9385% ( 1) 00:11:03.193 10.685 - 10.741: 99.9446% ( 1) 00:11:03.193 3989.148 - 4017.642: 100.0000% ( 9) 00:11:03.193 00:11:03.193 Complete histogram 00:11:03.193 ================== 00:11:03.193 Range in us Cumulative Count 00:11:03.193 1.823 - 1.837: 0.1292% ( 21) 00:11:03.193 1.837 - 1.850: 2.8853% ( 448) 00:11:03.193 1.850 - 1.864: 8.5512% ( 921) 00:11:03.193 1.864 - [2024-05-15 11:03:00.289555] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.193 1.878: 11.4549% ( 472) 00:11:03.193 1.878 - 1.892: 21.0274% ( 1556) 00:11:03.193 1.892 - 1.906: 67.3639% ( 7532) 00:11:03.193 1.906 - 1.920: 88.9388% ( 3507) 00:11:03.193 1.920 - 1.934: 93.5589% ( 751) 00:11:03.194 1.934 - 1.948: 95.3737% ( 295) 00:11:03.194 1.948 - 1.962: 96.4872% ( 181) 00:11:03.194 1.962 - 1.976: 97.7299% ( 202) 00:11:03.194 1.976 - 1.990: 98.4251% ( 113) 00:11:03.194 1.990 - 2.003: 98.7327% ( 50) 00:11:03.194 2.003 - 2.017: 98.8680% ( 22) 00:11:03.194 2.017 - 2.031: 98.9542% ( 14) 00:11:03.194 2.031 - 2.045: 99.0341% ( 13) 00:11:03.194 2.045 - 2.059: 99.0895% ( 9) 00:11:03.194 2.059 - 2.073: 99.1141% ( 4) 00:11:03.194 2.073 - 2.087: 99.1326% ( 3) 00:11:03.194 2.087 - 2.101: 99.1695% ( 6) 00:11:03.194 2.101 - 2.115: 99.2618% ( 15) 00:11:03.194 2.115 - 2.129: 99.2802% ( 3) 00:11:03.194 2.129 - 2.143: 99.2925% ( 2) 00:11:03.194 2.143 - 2.157: 99.2987% ( 1) 00:11:03.194 2.157 - 2.170: 99.3110% ( 2) 00:11:03.194 2.323 - 2.337: 99.3171% ( 1) 00:11:03.194 3.673 - 3.701: 99.3233% ( 1) 00:11:03.194 3.868 - 3.896: 99.3294% ( 1) 00:11:03.194 3.923 - 3.951: 99.3356% ( 1) 00:11:03.194 3.979 - 4.007: 99.3417% ( 1) 00:11:03.194 4.090 - 4.118: 99.3479% ( 1) 00:11:03.194 4.202 - 4.230: 99.3540% ( 1) 00:11:03.194 4.397 - 4.424: 99.3602% ( 1) 00:11:03.194 
4.536 - 4.563: 99.3663% ( 1) 00:11:03.194 4.619 - 4.647: 99.3725% ( 1) 00:11:03.194 4.647 - 4.675: 99.3787% ( 1) 00:11:03.194 4.703 - 4.730: 99.3848% ( 1) 00:11:03.194 4.786 - 4.814: 99.3910% ( 1) 00:11:03.194 4.870 - 4.897: 99.3971% ( 1) 00:11:03.194 4.953 - 4.981: 99.4033% ( 1) 00:11:03.194 5.064 - 5.092: 99.4094% ( 1) 00:11:03.194 5.287 - 5.315: 99.4156% ( 1) 00:11:03.194 5.426 - 5.454: 99.4217% ( 1) 00:11:03.194 5.482 - 5.510: 99.4279% ( 1) 00:11:03.194 5.510 - 5.537: 99.4340% ( 1) 00:11:03.194 5.649 - 5.677: 99.4402% ( 1) 00:11:03.194 5.816 - 5.843: 99.4463% ( 1) 00:11:03.194 6.650 - 6.678: 99.4525% ( 1) 00:11:03.194 8.237 - 8.292: 99.4586% ( 1) 00:11:03.194 9.683 - 9.739: 99.4648% ( 1) 00:11:03.194 40.292 - 40.515: 99.4709% ( 1) 00:11:03.194 194.115 - 195.005: 99.4771% ( 1) 00:11:03.194 2991.861 - 3006.108: 99.4832% ( 1) 00:11:03.194 3575.986 - 3590.233: 99.4894% ( 1) 00:11:03.194 3989.148 - 4017.642: 99.9877% ( 81) 00:11:03.194 4986.435 - 5014.929: 100.0000% ( 2) 00:11:03.194 00:11:03.194 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:03.194 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:03.194 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:03.194 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:03.194 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:03.453 [ 00:11:03.453 { 00:11:03.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:03.453 "subtype": "Discovery", 00:11:03.453 "listen_addresses": [], 00:11:03.453 "allow_any_host": true, 00:11:03.453 "hosts": [] 00:11:03.453 }, 00:11:03.453 { 00:11:03.453 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:11:03.453 "subtype": "NVMe", 00:11:03.453 "listen_addresses": [ 00:11:03.453 { 00:11:03.453 "trtype": "VFIOUSER", 00:11:03.453 "adrfam": "IPv4", 00:11:03.453 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:03.453 "trsvcid": "0" 00:11:03.453 } 00:11:03.453 ], 00:11:03.453 "allow_any_host": true, 00:11:03.453 "hosts": [], 00:11:03.453 "serial_number": "SPDK1", 00:11:03.453 "model_number": "SPDK bdev Controller", 00:11:03.453 "max_namespaces": 32, 00:11:03.453 "min_cntlid": 1, 00:11:03.453 "max_cntlid": 65519, 00:11:03.453 "namespaces": [ 00:11:03.453 { 00:11:03.453 "nsid": 1, 00:11:03.453 "bdev_name": "Malloc1", 00:11:03.453 "name": "Malloc1", 00:11:03.453 "nguid": "964CDAD04E5244BF96AD1AD847F68CD5", 00:11:03.453 "uuid": "964cdad0-4e52-44bf-96ad-1ad847f68cd5" 00:11:03.453 } 00:11:03.453 ] 00:11:03.453 }, 00:11:03.453 { 00:11:03.453 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:03.453 "subtype": "NVMe", 00:11:03.453 "listen_addresses": [ 00:11:03.453 { 00:11:03.453 "trtype": "VFIOUSER", 00:11:03.453 "adrfam": "IPv4", 00:11:03.453 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:03.453 "trsvcid": "0" 00:11:03.453 } 00:11:03.453 ], 00:11:03.453 "allow_any_host": true, 00:11:03.453 "hosts": [], 00:11:03.453 "serial_number": "SPDK2", 00:11:03.453 "model_number": "SPDK bdev Controller", 00:11:03.453 "max_namespaces": 32, 00:11:03.453 "min_cntlid": 1, 00:11:03.453 "max_cntlid": 65519, 00:11:03.453 "namespaces": [ 00:11:03.453 { 00:11:03.453 "nsid": 1, 00:11:03.453 "bdev_name": "Malloc2", 00:11:03.453 "name": "Malloc2", 00:11:03.453 "nguid": "B399F388FDBA48BA90B016B7A9CB9315", 00:11:03.453 "uuid": "b399f388-fdba-48ba-90b0-16b7a9cb9315" 00:11:03.453 } 00:11:03.453 ] 00:11:03.453 } 00:11:03.453 ] 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2175788 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:03.453 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.453 [2024-05-15 11:03:00.667566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.453 Malloc3 00:11:03.453 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:03.712 [2024-05-15 11:03:00.883261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.712 11:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:03.712 Asynchronous Event Request test 00:11:03.712 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.712 Attached to /var/run/vfio-user/domain/vfio-user1/1 
00:11:03.712 Registering asynchronous event callbacks... 00:11:03.712 Starting namespace attribute notice tests for all controllers... 00:11:03.712 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:03.712 aer_cb - Changed Namespace 00:11:03.712 Cleaning up... 00:11:03.971 [ 00:11:03.971 { 00:11:03.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:03.971 "subtype": "Discovery", 00:11:03.971 "listen_addresses": [], 00:11:03.971 "allow_any_host": true, 00:11:03.971 "hosts": [] 00:11:03.971 }, 00:11:03.971 { 00:11:03.971 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:03.971 "subtype": "NVMe", 00:11:03.971 "listen_addresses": [ 00:11:03.971 { 00:11:03.971 "trtype": "VFIOUSER", 00:11:03.971 "adrfam": "IPv4", 00:11:03.971 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:03.971 "trsvcid": "0" 00:11:03.971 } 00:11:03.971 ], 00:11:03.971 "allow_any_host": true, 00:11:03.971 "hosts": [], 00:11:03.971 "serial_number": "SPDK1", 00:11:03.971 "model_number": "SPDK bdev Controller", 00:11:03.971 "max_namespaces": 32, 00:11:03.971 "min_cntlid": 1, 00:11:03.971 "max_cntlid": 65519, 00:11:03.971 "namespaces": [ 00:11:03.971 { 00:11:03.971 "nsid": 1, 00:11:03.971 "bdev_name": "Malloc1", 00:11:03.971 "name": "Malloc1", 00:11:03.971 "nguid": "964CDAD04E5244BF96AD1AD847F68CD5", 00:11:03.971 "uuid": "964cdad0-4e52-44bf-96ad-1ad847f68cd5" 00:11:03.971 }, 00:11:03.971 { 00:11:03.971 "nsid": 2, 00:11:03.971 "bdev_name": "Malloc3", 00:11:03.971 "name": "Malloc3", 00:11:03.971 "nguid": "EA04667F30D7431C82E710A12E3A90EA", 00:11:03.971 "uuid": "ea04667f-30d7-431c-82e7-10a12e3a90ea" 00:11:03.971 } 00:11:03.971 ] 00:11:03.971 }, 00:11:03.971 { 00:11:03.971 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:03.971 "subtype": "NVMe", 00:11:03.971 "listen_addresses": [ 00:11:03.971 { 00:11:03.971 "trtype": "VFIOUSER", 00:11:03.971 "adrfam": "IPv4", 00:11:03.971 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:03.971 "trsvcid": 
"0" 00:11:03.971 } 00:11:03.971 ], 00:11:03.971 "allow_any_host": true, 00:11:03.971 "hosts": [], 00:11:03.971 "serial_number": "SPDK2", 00:11:03.971 "model_number": "SPDK bdev Controller", 00:11:03.971 "max_namespaces": 32, 00:11:03.971 "min_cntlid": 1, 00:11:03.971 "max_cntlid": 65519, 00:11:03.971 "namespaces": [ 00:11:03.971 { 00:11:03.971 "nsid": 1, 00:11:03.971 "bdev_name": "Malloc2", 00:11:03.971 "name": "Malloc2", 00:11:03.971 "nguid": "B399F388FDBA48BA90B016B7A9CB9315", 00:11:03.971 "uuid": "b399f388-fdba-48ba-90b0-16b7a9cb9315" 00:11:03.971 } 00:11:03.971 ] 00:11:03.971 } 00:11:03.971 ] 00:11:03.971 11:03:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2175788 00:11:03.971 11:03:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:03.971 11:03:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:03.971 11:03:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:03.972 11:03:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:03.972 [2024-05-15 11:03:01.111991] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:11:03.972 [2024-05-15 11:03:01.112027] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175826 ] 00:11:03.972 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.972 [2024-05-15 11:03:01.141547] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:03.972 [2024-05-15 11:03:01.152264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:03.972 [2024-05-15 11:03:01.152284] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4cfcfe0000 00:11:03.972 [2024-05-15 11:03:01.153264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.154266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.155275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.156281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.157288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.158302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.159307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.160316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.972 [2024-05-15 11:03:01.161327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:03.972 [2024-05-15 11:03:01.161340] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4cfcfd5000 00:11:03.972 [2024-05-15 11:03:01.162282] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:03.972 [2024-05-15 11:03:01.173794] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:03.972 [2024-05-15 11:03:01.173816] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:03.972 [2024-05-15 11:03:01.175862] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:03.972 [2024-05-15 11:03:01.175899] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:03.972 [2024-05-15 11:03:01.175972] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:03.972 [2024-05-15 11:03:01.175985] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:03.972 [2024-05-15 11:03:01.175991] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:03.972 [2024-05-15 11:03:01.176863] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:03.972 [2024-05-15 11:03:01.176873] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:03.972 [2024-05-15 11:03:01.176884] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:03.972 [2024-05-15 11:03:01.177868] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:03.972 [2024-05-15 11:03:01.177877] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:03.972 [2024-05-15 11:03:01.177883] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:03.972 [2024-05-15 11:03:01.178877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:03.972 [2024-05-15 11:03:01.178886] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:03.972 [2024-05-15 11:03:01.179888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:03.972 [2024-05-15 11:03:01.179896] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:03.972 [2024-05-15 11:03:01.179900] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:03.972 [2024-05-15 11:03:01.179907] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:03.972 [2024-05-15 11:03:01.180012] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:03.972 [2024-05-15 11:03:01.180016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:03.972 [2024-05-15 11:03:01.180021] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:03.972 [2024-05-15 11:03:01.180890] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:03.972 [2024-05-15 11:03:01.181899] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:03.972 [2024-05-15 11:03:01.182904] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:03.972 [2024-05-15 11:03:01.183907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:03.972 [2024-05-15 11:03:01.183945] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:03.972 [2024-05-15 11:03:01.184926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:03.972 [2024-05-15 11:03:01.184935] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:03.972 [2024-05-15 11:03:01.184939] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:03.972 [2024-05-15 11:03:01.184956] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:03.972 [2024-05-15 11:03:01.184963] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:03.972 [2024-05-15 11:03:01.184976] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:03.972 [2024-05-15 11:03:01.184982] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:03.972 [2024-05-15 11:03:01.184994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:03.972 [2024-05-15 11:03:01.195173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:03.972 [2024-05-15 11:03:01.195184] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:03.972 [2024-05-15 11:03:01.195189] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:03.972 [2024-05-15 11:03:01.195194] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:03.972 [2024-05-15 11:03:01.195198] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:03.972 [2024-05-15 11:03:01.195203] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:03.972 [2024-05-15 11:03:01.195208] 
nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:03.972 [2024-05-15 11:03:01.195212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:03.972 [2024-05-15 11:03:01.195221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:03.972 [2024-05-15 11:03:01.195232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:03.972 [2024-05-15 11:03:01.203169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:03.972 [2024-05-15 11:03:01.203182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.972 [2024-05-15 11:03:01.203190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.972 [2024-05-15 11:03:01.203197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.972 [2024-05-15 11:03:01.203205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.972 [2024-05-15 11:03:01.203210] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:03.972 [2024-05-15 11:03:01.203216] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.203225] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:03.973 [2024-05-15 11:03:01.211170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:03.973 [2024-05-15 11:03:01.211177] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:03.973 [2024-05-15 11:03:01.211184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.211190] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.211195] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.211205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:03.973 [2024-05-15 11:03:01.217967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:03.973 [2024-05-15 11:03:01.218012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.218020] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.218026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:03.973 [2024-05-15 11:03:01.218030] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:03.973 [2024-05-15 11:03:01.218036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:03.973 [2024-05-15 11:03:01.222171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:03.973 [2024-05-15 11:03:01.222183] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:03.973 [2024-05-15 11:03:01.222194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.222201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.222207] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:03.973 [2024-05-15 11:03:01.222211] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:03.973 [2024-05-15 11:03:01.222217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:03.973 [2024-05-15 11:03:01.230169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:03.973 [2024-05-15 11:03:01.230181] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:03.973 [2024-05-15 11:03:01.230188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:11:03.973 [2024-05-15 11:03:01.230194] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:03.973 [2024-05-15 11:03:01.230198] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:03.973 [2024-05-15 11:03:01.230204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.238170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.238185] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:04.232 [2024-05-15 11:03:01.238192] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:04.232 [2024-05-15 11:03:01.238198] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:04.232 [2024-05-15 11:03:01.238204] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:04.232 [2024-05-15 11:03:01.238208] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:04.232 [2024-05-15 11:03:01.238215] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:04.232 [2024-05-15 11:03:01.238219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:04.232 [2024-05-15 
11:03:01.238223] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:04.232 [2024-05-15 11:03:01.238241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.246170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.246183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.254170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.254182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.262170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.262183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.270170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.270182] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:04.232 [2024-05-15 11:03:01.270187] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:04.232 [2024-05-15 11:03:01.270190] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:04.232 [2024-05-15 11:03:01.270193] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:11:04.232 [2024-05-15 11:03:01.270199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:04.232 [2024-05-15 11:03:01.270206] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:04.232 [2024-05-15 11:03:01.270210] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:04.232 [2024-05-15 11:03:01.270215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.270222] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:04.232 [2024-05-15 11:03:01.270226] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.232 [2024-05-15 11:03:01.270231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.270240] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:04.232 [2024-05-15 11:03:01.270245] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:04.232 [2024-05-15 11:03:01.270250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:04.232 [2024-05-15 11:03:01.278172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.278189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 
11:03:01.278199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:04.232 [2024-05-15 11:03:01.278207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:04.232 ===================================================== 00:11:04.232 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:04.232 ===================================================== 00:11:04.232 Controller Capabilities/Features 00:11:04.232 ================================ 00:11:04.232 Vendor ID: 4e58 00:11:04.232 Subsystem Vendor ID: 4e58 00:11:04.232 Serial Number: SPDK2 00:11:04.232 Model Number: SPDK bdev Controller 00:11:04.232 Firmware Version: 24.05 00:11:04.232 Recommended Arb Burst: 6 00:11:04.232 IEEE OUI Identifier: 8d 6b 50 00:11:04.232 Multi-path I/O 00:11:04.232 May have multiple subsystem ports: Yes 00:11:04.232 May have multiple controllers: Yes 00:11:04.232 Associated with SR-IOV VF: No 00:11:04.232 Max Data Transfer Size: 131072 00:11:04.233 Max Number of Namespaces: 32 00:11:04.233 Max Number of I/O Queues: 127 00:11:04.233 NVMe Specification Version (VS): 1.3 00:11:04.233 NVMe Specification Version (Identify): 1.3 00:11:04.233 Maximum Queue Entries: 256 00:11:04.233 Contiguous Queues Required: Yes 00:11:04.233 Arbitration Mechanisms Supported 00:11:04.233 Weighted Round Robin: Not Supported 00:11:04.233 Vendor Specific: Not Supported 00:11:04.233 Reset Timeout: 15000 ms 00:11:04.233 Doorbell Stride: 4 bytes 00:11:04.233 NVM Subsystem Reset: Not Supported 00:11:04.233 Command Sets Supported 00:11:04.233 NVM Command Set: Supported 00:11:04.233 Boot Partition: Not Supported 00:11:04.233 Memory Page Size Minimum: 4096 bytes 00:11:04.233 Memory Page Size Maximum: 4096 bytes 00:11:04.233 Persistent Memory Region: Not Supported 00:11:04.233 Optional Asynchronous Events Supported 00:11:04.233 Namespace 
Attribute Notices: Supported 00:11:04.233 Firmware Activation Notices: Not Supported 00:11:04.233 ANA Change Notices: Not Supported 00:11:04.233 PLE Aggregate Log Change Notices: Not Supported 00:11:04.233 LBA Status Info Alert Notices: Not Supported 00:11:04.233 EGE Aggregate Log Change Notices: Not Supported 00:11:04.233 Normal NVM Subsystem Shutdown event: Not Supported 00:11:04.233 Zone Descriptor Change Notices: Not Supported 00:11:04.233 Discovery Log Change Notices: Not Supported 00:11:04.233 Controller Attributes 00:11:04.233 128-bit Host Identifier: Supported 00:11:04.233 Non-Operational Permissive Mode: Not Supported 00:11:04.233 NVM Sets: Not Supported 00:11:04.233 Read Recovery Levels: Not Supported 00:11:04.233 Endurance Groups: Not Supported 00:11:04.233 Predictable Latency Mode: Not Supported 00:11:04.233 Traffic Based Keep ALive: Not Supported 00:11:04.233 Namespace Granularity: Not Supported 00:11:04.233 SQ Associations: Not Supported 00:11:04.233 UUID List: Not Supported 00:11:04.233 Multi-Domain Subsystem: Not Supported 00:11:04.233 Fixed Capacity Management: Not Supported 00:11:04.233 Variable Capacity Management: Not Supported 00:11:04.233 Delete Endurance Group: Not Supported 00:11:04.233 Delete NVM Set: Not Supported 00:11:04.233 Extended LBA Formats Supported: Not Supported 00:11:04.233 Flexible Data Placement Supported: Not Supported 00:11:04.233 00:11:04.233 Controller Memory Buffer Support 00:11:04.233 ================================ 00:11:04.233 Supported: No 00:11:04.233 00:11:04.233 Persistent Memory Region Support 00:11:04.233 ================================ 00:11:04.233 Supported: No 00:11:04.233 00:11:04.233 Admin Command Set Attributes 00:11:04.233 ============================ 00:11:04.233 Security Send/Receive: Not Supported 00:11:04.233 Format NVM: Not Supported 00:11:04.233 Firmware Activate/Download: Not Supported 00:11:04.233 Namespace Management: Not Supported 00:11:04.233 Device Self-Test: Not Supported 00:11:04.233 
Directives: Not Supported 00:11:04.233 NVMe-MI: Not Supported 00:11:04.233 Virtualization Management: Not Supported 00:11:04.233 Doorbell Buffer Config: Not Supported 00:11:04.233 Get LBA Status Capability: Not Supported 00:11:04.233 Command & Feature Lockdown Capability: Not Supported 00:11:04.233 Abort Command Limit: 4 00:11:04.233 Async Event Request Limit: 4 00:11:04.233 Number of Firmware Slots: N/A 00:11:04.233 Firmware Slot 1 Read-Only: N/A 00:11:04.233 Firmware Activation Without Reset: N/A 00:11:04.233 Multiple Update Detection Support: N/A 00:11:04.233 Firmware Update Granularity: No Information Provided 00:11:04.233 Per-Namespace SMART Log: No 00:11:04.233 Asymmetric Namespace Access Log Page: Not Supported 00:11:04.233 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:04.233 Command Effects Log Page: Supported 00:11:04.233 Get Log Page Extended Data: Supported 00:11:04.233 Telemetry Log Pages: Not Supported 00:11:04.233 Persistent Event Log Pages: Not Supported 00:11:04.233 Supported Log Pages Log Page: May Support 00:11:04.233 Commands Supported & Effects Log Page: Not Supported 00:11:04.233 Feature Identifiers & Effects Log Page:May Support 00:11:04.233 NVMe-MI Commands & Effects Log Page: May Support 00:11:04.233 Data Area 4 for Telemetry Log: Not Supported 00:11:04.233 Error Log Page Entries Supported: 128 00:11:04.233 Keep Alive: Supported 00:11:04.233 Keep Alive Granularity: 10000 ms 00:11:04.233 00:11:04.233 NVM Command Set Attributes 00:11:04.233 ========================== 00:11:04.233 Submission Queue Entry Size 00:11:04.233 Max: 64 00:11:04.233 Min: 64 00:11:04.233 Completion Queue Entry Size 00:11:04.233 Max: 16 00:11:04.233 Min: 16 00:11:04.233 Number of Namespaces: 32 00:11:04.233 Compare Command: Supported 00:11:04.233 Write Uncorrectable Command: Not Supported 00:11:04.233 Dataset Management Command: Supported 00:11:04.233 Write Zeroes Command: Supported 00:11:04.233 Set Features Save Field: Not Supported 00:11:04.233 Reservations: Not 
Supported 00:11:04.233 Timestamp: Not Supported 00:11:04.233 Copy: Supported 00:11:04.233 Volatile Write Cache: Present 00:11:04.233 Atomic Write Unit (Normal): 1 00:11:04.233 Atomic Write Unit (PFail): 1 00:11:04.233 Atomic Compare & Write Unit: 1 00:11:04.233 Fused Compare & Write: Supported 00:11:04.233 Scatter-Gather List 00:11:04.233 SGL Command Set: Supported (Dword aligned) 00:11:04.233 SGL Keyed: Not Supported 00:11:04.233 SGL Bit Bucket Descriptor: Not Supported 00:11:04.233 SGL Metadata Pointer: Not Supported 00:11:04.233 Oversized SGL: Not Supported 00:11:04.233 SGL Metadata Address: Not Supported 00:11:04.233 SGL Offset: Not Supported 00:11:04.233 Transport SGL Data Block: Not Supported 00:11:04.233 Replay Protected Memory Block: Not Supported 00:11:04.233 00:11:04.233 Firmware Slot Information 00:11:04.233 ========================= 00:11:04.233 Active slot: 1 00:11:04.233 Slot 1 Firmware Revision: 24.05 00:11:04.233 00:11:04.233 00:11:04.233 Commands Supported and Effects 00:11:04.233 ============================== 00:11:04.233 Admin Commands 00:11:04.233 -------------- 00:11:04.233 Get Log Page (02h): Supported 00:11:04.233 Identify (06h): Supported 00:11:04.233 Abort (08h): Supported 00:11:04.233 Set Features (09h): Supported 00:11:04.233 Get Features (0Ah): Supported 00:11:04.233 Asynchronous Event Request (0Ch): Supported 00:11:04.233 Keep Alive (18h): Supported 00:11:04.233 I/O Commands 00:11:04.233 ------------ 00:11:04.233 Flush (00h): Supported LBA-Change 00:11:04.233 Write (01h): Supported LBA-Change 00:11:04.233 Read (02h): Supported 00:11:04.233 Compare (05h): Supported 00:11:04.233 Write Zeroes (08h): Supported LBA-Change 00:11:04.233 Dataset Management (09h): Supported LBA-Change 00:11:04.233 Copy (19h): Supported LBA-Change 00:11:04.233 Unknown (79h): Supported LBA-Change 00:11:04.233 Unknown (7Ah): Supported 00:11:04.233 00:11:04.233 Error Log 00:11:04.233 ========= 00:11:04.233 00:11:04.233 Arbitration 00:11:04.233 =========== 
00:11:04.233 Arbitration Burst: 1 00:11:04.233 00:11:04.233 Power Management 00:11:04.233 ================ 00:11:04.233 Number of Power States: 1 00:11:04.233 Current Power State: Power State #0 00:11:04.233 Power State #0: 00:11:04.233 Max Power: 0.00 W 00:11:04.233 Non-Operational State: Operational 00:11:04.233 Entry Latency: Not Reported 00:11:04.233 Exit Latency: Not Reported 00:11:04.233 Relative Read Throughput: 0 00:11:04.233 Relative Read Latency: 0 00:11:04.233 Relative Write Throughput: 0 00:11:04.233 Relative Write Latency: 0 00:11:04.233 Idle Power: Not Reported 00:11:04.233 Active Power: Not Reported 00:11:04.233 Non-Operational Permissive Mode: Not Supported 00:11:04.233 00:11:04.233 Health Information 00:11:04.233 ================== 00:11:04.233 Critical Warnings: 00:11:04.233 Available Spare Space: OK 00:11:04.233 Temperature: OK 00:11:04.233 Device Reliability: OK 00:11:04.233 Read Only: No 00:11:04.233 Volatile Memory Backup: OK 00:11:04.233 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:04.233 [2024-05-15 11:03:01.278297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:04.233 [2024-05-15 11:03:01.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:04.233 [2024-05-15 11:03:01.286196] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:04.233 [2024-05-15 11:03:01.286204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.233 [2024-05-15 11:03:01.286209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.233 [2024-05-15 11:03:01.286214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:11:04.233 [2024-05-15 11:03:01.286220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.233 [2024-05-15 11:03:01.286263] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:04.233 [2024-05-15 11:03:01.286273] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:04.233 [2024-05-15 11:03:01.287271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:04.233 [2024-05-15 11:03:01.287313] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:04.233 [2024-05-15 11:03:01.287319] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:04.233 [2024-05-15 11:03:01.288271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:04.233 [2024-05-15 11:03:01.288281] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:04.233 [2024-05-15 11:03:01.288327] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:04.233 [2024-05-15 11:03:01.291169] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:04.233 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:04.233 Available Spare: 0% 00:11:04.233 Available Spare Threshold: 0% 00:11:04.233 Life Percentage Used: 0% 00:11:04.233 Data Units Read: 0 00:11:04.233 Data Units Written: 0 00:11:04.233 Host Read Commands: 0 00:11:04.233 Host Write
Commands: 0 00:11:04.233 Controller Busy Time: 0 minutes 00:11:04.233 Power Cycles: 0 00:11:04.233 Power On Hours: 0 hours 00:11:04.233 Unsafe Shutdowns: 0 00:11:04.233 Unrecoverable Media Errors: 0 00:11:04.233 Lifetime Error Log Entries: 0 00:11:04.233 Warning Temperature Time: 0 minutes 00:11:04.233 Critical Temperature Time: 0 minutes 00:11:04.233 00:11:04.233 Number of Queues 00:11:04.233 ================ 00:11:04.233 Number of I/O Submission Queues: 127 00:11:04.233 Number of I/O Completion Queues: 127 00:11:04.233 00:11:04.233 Active Namespaces 00:11:04.233 ================= 00:11:04.233 Namespace ID:1 00:11:04.233 Error Recovery Timeout: Unlimited 00:11:04.233 Command Set Identifier: NVM (00h) 00:11:04.233 Deallocate: Supported 00:11:04.233 Deallocated/Unwritten Error: Not Supported 00:11:04.233 Deallocated Read Value: Unknown 00:11:04.233 Deallocate in Write Zeroes: Not Supported 00:11:04.233 Deallocated Guard Field: 0xFFFF 00:11:04.233 Flush: Supported 00:11:04.233 Reservation: Supported 00:11:04.233 Namespace Sharing Capabilities: Multiple Controllers 00:11:04.233 Size (in LBAs): 131072 (0GiB) 00:11:04.233 Capacity (in LBAs): 131072 (0GiB) 00:11:04.233 Utilization (in LBAs): 131072 (0GiB) 00:11:04.233 NGUID: B399F388FDBA48BA90B016B7A9CB9315 00:11:04.233 UUID: b399f388-fdba-48ba-90b0-16b7a9cb9315 00:11:04.234 Thin Provisioning: Not Supported 00:11:04.234 Per-NS Atomic Units: Yes 00:11:04.234 Atomic Boundary Size (Normal): 0 00:11:04.234 Atomic Boundary Size (PFail): 0 00:11:04.234 Atomic Boundary Offset: 0 00:11:04.234 Maximum Single Source Range Length: 65535 00:11:04.234 Maximum Copy Length: 65535 00:11:04.234 Maximum Source Range Count: 1 00:11:04.234 NGUID/EUI64 Never Reused: No 00:11:04.234 Namespace Write Protected: No 00:11:04.234 Number of LBA Formats: 1 00:11:04.234 Current LBA Format: LBA Format #00 00:11:04.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:04.234 00:11:04.234 11:03:01 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:04.234 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.492 [2024-05-15 11:03:01.515586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:09.762 Initializing NVMe Controllers 00:11:09.762 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:09.762 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:09.762 Initialization complete. Launching workers. 00:11:09.762 ======================================================== 00:11:09.762 Latency(us) 00:11:09.762 Device Information : IOPS MiB/s Average min max 00:11:09.762 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.41 156.08 3203.25 956.58 8606.61 00:11:09.762 ======================================================== 00:11:09.762 Total : 39957.41 156.08 3203.25 956.58 8606.61 00:11:09.762 00:11:09.762 [2024-05-15 11:03:06.621411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:09.762 11:03:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:09.762 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.762 [2024-05-15 11:03:06.837043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:15.038 Initializing NVMe Controllers 00:11:15.038 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:11:15.038 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:15.038 Initialization complete. Launching workers. 00:11:15.038 ======================================================== 00:11:15.038 Latency(us) 00:11:15.038 Device Information : IOPS MiB/s Average min max 00:11:15.038 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.25 156.07 3203.57 980.51 6616.60 00:11:15.038 ======================================================== 00:11:15.038 Total : 39953.25 156.07 3203.57 980.51 6616.60 00:11:15.038 00:11:15.038 [2024-05-15 11:03:11.858601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:15.039 11:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:15.039 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.039 [2024-05-15 11:03:12.043740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:20.313 [2024-05-15 11:03:17.180260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:20.313 Initializing NVMe Controllers 00:11:20.313 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:20.313 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:20.313 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:20.313 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:20.313 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:20.313 Initialization complete. 
Launching workers. 00:11:20.313 Starting thread on core 2 00:11:20.313 Starting thread on core 3 00:11:20.313 Starting thread on core 1 00:11:20.313 11:03:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:20.313 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.313 [2024-05-15 11:03:17.455606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:23.604 [2024-05-15 11:03:20.509511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:23.604 Initializing NVMe Controllers 00:11:23.604 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.604 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.604 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:23.604 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:23.604 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:23.604 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:23.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:23.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:23.604 Initialization complete. Launching workers. 
00:11:23.604 Starting thread on core 1 with urgent priority queue 00:11:23.604 Starting thread on core 2 with urgent priority queue 00:11:23.604 Starting thread on core 3 with urgent priority queue 00:11:23.604 Starting thread on core 0 with urgent priority queue 00:11:23.604 SPDK bdev Controller (SPDK2 ) core 0: 9155.00 IO/s 10.92 secs/100000 ios 00:11:23.604 SPDK bdev Controller (SPDK2 ) core 1: 8516.33 IO/s 11.74 secs/100000 ios 00:11:23.604 SPDK bdev Controller (SPDK2 ) core 2: 7625.00 IO/s 13.11 secs/100000 ios 00:11:23.604 SPDK bdev Controller (SPDK2 ) core 3: 11542.00 IO/s 8.66 secs/100000 ios 00:11:23.604 ======================================================== 00:11:23.604 00:11:23.604 11:03:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:23.604 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.604 [2024-05-15 11:03:20.774604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:23.604 Initializing NVMe Controllers 00:11:23.604 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.604 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.604 Namespace ID: 1 size: 0GB 00:11:23.604 Initialization complete. 00:11:23.604 INFO: using host memory buffer for IO 00:11:23.604 Hello world! 
00:11:23.604 [2024-05-15 11:03:20.786682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:23.604 11:03:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:23.863 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.863 [2024-05-15 11:03:21.056140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.241 Initializing NVMe Controllers 00:11:25.241 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.241 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.241 Initialization complete. Launching workers. 00:11:25.241 submit (in ns) avg, min, max = 6425.1, 3289.6, 4002373.9 00:11:25.241 complete (in ns) avg, min, max = 21557.5, 1824.3, 4001234.8 00:11:25.241 00:11:25.241 Submit histogram 00:11:25.241 ================ 00:11:25.241 Range in us Cumulative Count 00:11:25.241 3.283 - 3.297: 0.0182% ( 3) 00:11:25.241 3.297 - 3.311: 0.3035% ( 47) 00:11:25.241 3.311 - 3.325: 0.7649% ( 76) 00:11:25.241 3.325 - 3.339: 1.4266% ( 109) 00:11:25.241 3.339 - 3.353: 2.3554% ( 153) 00:11:25.241 3.353 - 3.367: 4.3404% ( 327) 00:11:25.241 3.367 - 3.381: 8.1527% ( 628) 00:11:25.241 3.381 - 3.395: 13.5130% ( 883) 00:11:25.241 3.395 - 3.409: 19.4136% ( 972) 00:11:25.241 3.409 - 3.423: 25.2049% ( 954) 00:11:25.241 3.423 - 3.437: 30.5530% ( 881) 00:11:25.241 3.437 - 3.450: 35.5612% ( 825) 00:11:25.241 3.450 - 3.464: 41.2493% ( 937) 00:11:25.241 3.464 - 3.478: 45.8569% ( 759) 00:11:25.241 3.478 - 3.492: 49.7724% ( 645) 00:11:25.241 3.492 - 3.506: 53.7546% ( 656) 00:11:25.241 3.506 - 3.520: 59.5216% ( 950) 00:11:25.241 3.520 - 3.534: 66.0353% ( 1073) 00:11:25.241 3.534 - 3.548: 70.6186% ( 755) 00:11:25.241 3.548 - 3.562: 75.4750% ( 800) 
00:11:25.241 3.562 - 3.590: 83.0086% ( 1241) 00:11:25.241 3.590 - 3.617: 86.3595% ( 552) 00:11:25.241 3.617 - 3.645: 87.3672% ( 166) 00:11:25.241 3.645 - 3.673: 88.4538% ( 179) 00:11:25.241 3.673 - 3.701: 90.1657% ( 282) 00:11:25.241 3.701 - 3.729: 91.9869% ( 300) 00:11:25.241 3.729 - 3.757: 93.6381% ( 272) 00:11:25.241 3.757 - 3.784: 95.3742% ( 286) 00:11:25.241 3.784 - 3.812: 97.0012% ( 268) 00:11:25.241 3.812 - 3.840: 98.1363% ( 187) 00:11:25.241 3.840 - 3.868: 98.8952% ( 125) 00:11:25.241 3.868 - 3.896: 99.2655% ( 61) 00:11:25.241 3.896 - 3.923: 99.5751% ( 51) 00:11:25.241 3.923 - 3.951: 99.6722% ( 16) 00:11:25.241 3.951 - 3.979: 99.6965% ( 4) 00:11:25.241 4.035 - 4.063: 99.7025% ( 1) 00:11:25.241 4.981 - 5.009: 99.7086% ( 1) 00:11:25.241 5.148 - 5.176: 99.7147% ( 1) 00:11:25.241 5.176 - 5.203: 99.7208% ( 1) 00:11:25.241 5.315 - 5.343: 99.7268% ( 1) 00:11:25.241 5.370 - 5.398: 99.7329% ( 1) 00:11:25.241 5.454 - 5.482: 99.7511% ( 3) 00:11:25.241 5.510 - 5.537: 99.7572% ( 1) 00:11:25.241 5.593 - 5.621: 99.7693% ( 2) 00:11:25.241 5.621 - 5.649: 99.7815% ( 2) 00:11:25.241 5.732 - 5.760: 99.7875% ( 1) 00:11:25.241 5.843 - 5.871: 99.7936% ( 1) 00:11:25.241 5.871 - 5.899: 99.7997% ( 1) 00:11:25.241 5.927 - 5.955: 99.8057% ( 1) 00:11:25.241 6.010 - 6.038: 99.8118% ( 1) 00:11:25.241 6.066 - 6.094: 99.8179% ( 1) 00:11:25.241 6.122 - 6.150: 99.8240% ( 1) 00:11:25.241 6.233 - 6.261: 99.8361% ( 2) 00:11:25.242 6.317 - 6.344: 99.8482% ( 2) 00:11:25.242 6.344 - 6.372: 99.8604% ( 2) 00:11:25.242 6.372 - 6.400: 99.8664% ( 1) 00:11:25.242 6.428 - 6.456: 99.8725% ( 1) 00:11:25.242 6.456 - 6.483: 99.8847% ( 2) 00:11:25.242 6.650 - 6.678: 99.8907% ( 1) 00:11:25.242 6.817 - 6.845: 99.8968% ( 1) 00:11:25.242 6.984 - 7.012: 99.9029% ( 1) 00:11:25.242 7.235 - 7.290: 99.9089% ( 1) 00:11:25.242 7.457 - 7.513: 99.9150% ( 1) 00:11:25.242 7.624 - 7.680: 99.9211% ( 1) 00:11:25.242 9.628 - 9.683: 99.9272% ( 1) 00:11:25.242 3989.148 - 4017.642: 100.0000% ( 12) 00:11:25.242 00:11:25.242 
Complete histogram 00:11:25.242 ================== 00:11:25.242 Range in us Cumulative Count 00:11:25.242 1.823 - 1.837: 0.4310% ( 71) 00:11:25.242 1.837 - 1.850: 2.7621% ( 384) 00:11:25.242 1.850 - 1.864: 4.5893% ( 301) 00:11:25.242 1.864 - 1.878: 5.7124% ( 185) 00:11:25.242 1.878 - 1.892: 14.4964% ( 1447) 00:11:25.242 1.892 - 1.906: 67.0734% ( 8661) 00:11:25.242 1.906 - 1.920: 88.7027% ( 3563) 00:11:25.242 1.920 - 1.934: 92.8368% ( 681) 00:11:25.242 1.934 - 1.948: 96.4912% ( 602) 00:11:25.242 1.948 - 1.962: 97.6325% ( 188) 00:11:25.242 1.962 - 1.976: 98.4824% ( 140) 00:11:25.242 1.976 - 1.990: 99.0105% ( 87) 00:11:25.242 [2024-05-15 11:03:22.150216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:25.242 1.990 - 2.003: 99.1987% ( 31) 00:11:25.242 2.003 - 2.017: 99.2533% ( 9) 00:11:25.242 2.017 - 2.031: 99.2715% ( 3) 00:11:25.242 2.031 - 2.045: 99.2776% ( 1) 00:11:25.242 2.045 - 2.059: 99.2837% ( 1) 00:11:25.242 2.087 - 2.101: 99.2897% ( 1) 00:11:25.242 2.101 - 2.115: 99.2958% ( 1) 00:11:25.242 2.240 - 2.254: 99.3019% ( 1) 00:11:25.242 3.464 - 3.478: 99.3080% ( 1) 00:11:25.242 3.506 - 3.520: 99.3201% ( 2) 00:11:25.242 3.645 - 3.673: 99.3262% ( 1) 00:11:25.242 3.757 - 3.784: 99.3322% ( 1) 00:11:25.242 3.784 - 3.812: 99.3383% ( 1) 00:11:25.242 3.812 - 3.840: 99.3444% ( 1) 00:11:25.242 4.090 - 4.118: 99.3505% ( 1) 00:11:25.242 4.118 - 4.146: 99.3687% ( 3) 00:11:25.242 4.230 - 4.257: 99.3747% ( 1) 00:11:25.242 4.285 - 4.313: 99.3808% ( 1) 00:11:25.242 4.369 - 4.397: 99.3869% ( 1) 00:11:25.242 4.424 - 4.452: 99.3929% ( 1) 00:11:25.242 4.452 - 4.480: 99.4051% ( 2) 00:11:25.242 4.536 - 4.563: 99.4172% ( 2) 00:11:25.242 4.675 - 4.703: 99.4233% ( 1) 00:11:25.242 4.703 - 4.730: 99.4294% ( 1) 00:11:25.242 4.730 - 4.758: 99.4354% ( 1) 00:11:25.242 4.925 - 4.953: 99.4476% ( 2) 00:11:25.242 4.953 - 4.981: 99.4537% ( 1) 00:11:25.242 5.009 - 5.037: 99.4658% ( 2) 00:11:25.242 5.176 - 5.203: 99.4719% ( 1) 
5.621 - 5.649: 99.4779% ( 1) 00:11:25.242 5.955 - 5.983: 99.4901% ( 2) 00:11:25.242 7.513 - 7.569: 99.4961% ( 1) 00:11:25.242 17.697 - 17.809: 99.5022% ( 1) 00:11:25.242 178.977 - 179.868: 99.5083% ( 1) 00:11:25.242 3989.148 - 4017.642: 100.0000% ( 81) 00:11:25.242 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:25.242 [ 00:11:25.242 { 00:11:25.242 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:25.242 "subtype": "Discovery", 00:11:25.242 "listen_addresses": [], 00:11:25.242 "allow_any_host": true, 00:11:25.242 "hosts": [] 00:11:25.242 }, 00:11:25.242 { 00:11:25.242 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:25.242 "subtype": "NVMe", 00:11:25.242 "listen_addresses": [ 00:11:25.242 { 00:11:25.242 "trtype": "VFIOUSER", 00:11:25.242 "adrfam": "IPv4", 00:11:25.242 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:25.242 "trsvcid": "0" 00:11:25.242 } 00:11:25.242 ], 00:11:25.242 "allow_any_host": true, 00:11:25.242 "hosts": [], 00:11:25.242 "serial_number": "SPDK1", 00:11:25.242 "model_number": "SPDK bdev Controller", 00:11:25.242 "max_namespaces": 32, 00:11:25.242 "min_cntlid": 1, 00:11:25.242 "max_cntlid": 65519, 00:11:25.242 "namespaces": [ 00:11:25.242 { 00:11:25.242 "nsid": 1, 00:11:25.242 "bdev_name": "Malloc1", 00:11:25.242 "name": "Malloc1", 00:11:25.242 "nguid": "964CDAD04E5244BF96AD1AD847F68CD5", 
00:11:25.242 "uuid": "964cdad0-4e52-44bf-96ad-1ad847f68cd5" 00:11:25.242 }, 00:11:25.242 { 00:11:25.242 "nsid": 2, 00:11:25.242 "bdev_name": "Malloc3", 00:11:25.242 "name": "Malloc3", 00:11:25.242 "nguid": "EA04667F30D7431C82E710A12E3A90EA", 00:11:25.242 "uuid": "ea04667f-30d7-431c-82e7-10a12e3a90ea" 00:11:25.242 } 00:11:25.242 ] 00:11:25.242 }, 00:11:25.242 { 00:11:25.242 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:25.242 "subtype": "NVMe", 00:11:25.242 "listen_addresses": [ 00:11:25.242 { 00:11:25.242 "trtype": "VFIOUSER", 00:11:25.242 "adrfam": "IPv4", 00:11:25.242 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:25.242 "trsvcid": "0" 00:11:25.242 } 00:11:25.242 ], 00:11:25.242 "allow_any_host": true, 00:11:25.242 "hosts": [], 00:11:25.242 "serial_number": "SPDK2", 00:11:25.242 "model_number": "SPDK bdev Controller", 00:11:25.242 "max_namespaces": 32, 00:11:25.242 "min_cntlid": 1, 00:11:25.242 "max_cntlid": 65519, 00:11:25.242 "namespaces": [ 00:11:25.242 { 00:11:25.242 "nsid": 1, 00:11:25.242 "bdev_name": "Malloc2", 00:11:25.242 "name": "Malloc2", 00:11:25.242 "nguid": "B399F388FDBA48BA90B016B7A9CB9315", 00:11:25.242 "uuid": "b399f388-fdba-48ba-90b0-16b7a9cb9315" 00:11:25.242 } 00:11:25.242 ] 00:11:25.242 } 00:11:25.242 ] 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2179851 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:11:25.242 11:03:22 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:25.242 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:25.242 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.502 [2024-05-15 11:03:22.523563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.502 Malloc4 00:11:25.502 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:25.502 [2024-05-15 11:03:22.736189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:25.502 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:25.761 Asynchronous Event Request test 00:11:25.761 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.761 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.761 Registering asynchronous event callbacks... 00:11:25.761 Starting namespace attribute notice tests for all controllers... 00:11:25.761 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:25.761 aer_cb - Changed Namespace 00:11:25.761 Cleaning up... 
00:11:25.761 [ 00:11:25.761 { 00:11:25.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:25.761 "subtype": "Discovery", 00:11:25.761 "listen_addresses": [], 00:11:25.761 "allow_any_host": true, 00:11:25.761 "hosts": [] 00:11:25.761 }, 00:11:25.761 { 00:11:25.761 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:25.761 "subtype": "NVMe", 00:11:25.761 "listen_addresses": [ 00:11:25.761 { 00:11:25.761 "trtype": "VFIOUSER", 00:11:25.761 "adrfam": "IPv4", 00:11:25.761 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:25.761 "trsvcid": "0" 00:11:25.761 } 00:11:25.761 ], 00:11:25.761 "allow_any_host": true, 00:11:25.761 "hosts": [], 00:11:25.761 "serial_number": "SPDK1", 00:11:25.761 "model_number": "SPDK bdev Controller", 00:11:25.761 "max_namespaces": 32, 00:11:25.761 "min_cntlid": 1, 00:11:25.761 "max_cntlid": 65519, 00:11:25.761 "namespaces": [ 00:11:25.761 { 00:11:25.761 "nsid": 1, 00:11:25.761 "bdev_name": "Malloc1", 00:11:25.761 "name": "Malloc1", 00:11:25.761 "nguid": "964CDAD04E5244BF96AD1AD847F68CD5", 00:11:25.761 "uuid": "964cdad0-4e52-44bf-96ad-1ad847f68cd5" 00:11:25.761 }, 00:11:25.761 { 00:11:25.761 "nsid": 2, 00:11:25.761 "bdev_name": "Malloc3", 00:11:25.761 "name": "Malloc3", 00:11:25.761 "nguid": "EA04667F30D7431C82E710A12E3A90EA", 00:11:25.761 "uuid": "ea04667f-30d7-431c-82e7-10a12e3a90ea" 00:11:25.761 } 00:11:25.761 ] 00:11:25.761 }, 00:11:25.761 { 00:11:25.761 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:25.761 "subtype": "NVMe", 00:11:25.761 "listen_addresses": [ 00:11:25.761 { 00:11:25.761 "trtype": "VFIOUSER", 00:11:25.761 "adrfam": "IPv4", 00:11:25.761 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:25.761 "trsvcid": "0" 00:11:25.761 } 00:11:25.761 ], 00:11:25.761 "allow_any_host": true, 00:11:25.761 "hosts": [], 00:11:25.761 "serial_number": "SPDK2", 00:11:25.761 "model_number": "SPDK bdev Controller", 00:11:25.761 "max_namespaces": 32, 00:11:25.761 "min_cntlid": 1, 00:11:25.761 "max_cntlid": 65519, 00:11:25.761 "namespaces": [ 
00:11:25.761 { 00:11:25.761 "nsid": 1, 00:11:25.761 "bdev_name": "Malloc2", 00:11:25.761 "name": "Malloc2", 00:11:25.761 "nguid": "B399F388FDBA48BA90B016B7A9CB9315", 00:11:25.761 "uuid": "b399f388-fdba-48ba-90b0-16b7a9cb9315" 00:11:25.761 }, 00:11:25.761 { 00:11:25.761 "nsid": 2, 00:11:25.761 "bdev_name": "Malloc4", 00:11:25.761 "name": "Malloc4", 00:11:25.761 "nguid": "1531C930EE1D408EA28C084CCD05E6EF", 00:11:25.761 "uuid": "1531c930-ee1d-408e-a28c-084ccd05e6ef" 00:11:25.761 } 00:11:25.761 ] 00:11:25.761 } 00:11:25.761 ] 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2179851 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2171593 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2171593 ']' 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2171593 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2171593 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2171593' 00:11:25.761 killing process with pid 2171593 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2171593 00:11:25.761 [2024-05-15 11:03:22.973627] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:25.761 11:03:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2171593 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2179967 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2179967' 00:11:26.067 Process pid: 2179967 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2179967 00:11:26.067 11:03:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2179967 ']' 00:11:26.068 11:03:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.068 11:03:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:26.068 11:03:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:26.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.068 11:03:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:26.068 11:03:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:26.328 [2024-05-15 11:03:23.297114] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:26.328 [2024-05-15 11:03:23.298021] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:11:26.328 [2024-05-15 11:03:23.298060] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.328 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.328 [2024-05-15 11:03:23.352937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.328 [2024-05-15 11:03:23.422111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.328 [2024-05-15 11:03:23.422156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.328 [2024-05-15 11:03:23.422163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.328 [2024-05-15 11:03:23.422173] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.328 [2024-05-15 11:03:23.422177] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:26.328 [2024-05-15 11:03:23.422278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.328 [2024-05-15 11:03:23.422366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.328 [2024-05-15 11:03:23.422457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.328 [2024-05-15 11:03:23.422458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.328 [2024-05-15 11:03:23.508101] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:26.328 [2024-05-15 11:03:23.508225] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:26.328 [2024-05-15 11:03:23.508447] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:26.328 [2024-05-15 11:03:23.508798] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:26.328 [2024-05-15 11:03:23.509019] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:11:26.896 11:03:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:26.896 11:03:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:11:26.896 11:03:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:28.271 Malloc1 00:11:28.271 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:28.530 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:28.789 11:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:28.789 [2024-05-15 11:03:26.002837] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:11:28.789 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:28.789 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:28.789 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:29.048 Malloc2 00:11:29.048 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:29.307 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2179967 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2179967 ']' 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2179967 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2179967 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:29.566 11:03:26 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2179967' 00:11:29.566 killing process with pid 2179967 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2179967 00:11:29.566 [2024-05-15 11:03:26.823287] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:29.566 11:03:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2179967 00:11:29.825 11:03:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:29.825 11:03:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:29.825 00:11:29.825 real 0m51.270s 00:11:29.825 user 3m22.847s 00:11:29.825 sys 0m3.626s 00:11:29.825 11:03:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:29.825 11:03:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:29.825 ************************************ 00:11:29.825 END TEST nvmf_vfio_user 00:11:29.825 ************************************ 00:11:30.085 11:03:27 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:30.085 11:03:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:30.085 11:03:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:30.085 11:03:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.085 ************************************ 00:11:30.085 START TEST nvmf_vfio_user_nvme_compliance 00:11:30.085 ************************************ 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:30.085 * Looking for test storage... 00:11:30.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.085 11:03:27 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:30.085 11:03:27 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2180730 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2180730' 00:11:30.085 Process pid: 2180730 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2180730 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # '[' -z 2180730 ']' 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:30.085 11:03:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:30.085 [2024-05-15 11:03:27.292805] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:11:30.086 [2024-05-15 11:03:27.292853] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.086 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.086 [2024-05-15 11:03:27.346714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.345 [2024-05-15 11:03:27.425861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.345 [2024-05-15 11:03:27.425898] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:30.345 [2024-05-15 11:03:27.425904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.345 [2024-05-15 11:03:27.425910] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.345 [2024-05-15 11:03:27.425915] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.345 [2024-05-15 11:03:27.425965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.345 [2024-05-15 11:03:27.426064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.345 [2024-05-15 11:03:27.426065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.914 11:03:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:30.914 11:03:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # return 0 00:11:30.914 11:03:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.292 malloc0 00:11:32.292 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 [2024-05-15 11:03:29.190459] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is 
deprecated in favor of trtype to be removed in v24.09 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.293 11:03:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:32.293 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.293 00:11:32.293 00:11:32.293 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.293 http://cunit.sourceforge.net/ 00:11:32.293 00:11:32.293 00:11:32.293 Suite: nvme_compliance 00:11:32.293 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 11:03:29.340616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.293 [2024-05-15 11:03:29.341944] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:32.293 [2024-05-15 11:03:29.341958] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:32.293 [2024-05-15 11:03:29.341964] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:32.293 [2024-05-15 11:03:29.343645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.293 passed 00:11:32.293 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 11:03:29.423208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.293 [2024-05-15 11:03:29.426235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.293 passed 00:11:32.293 Test: admin_identify_ns ...[2024-05-15 11:03:29.502052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.552 [2024-05-15 11:03:29.562174] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:32.552 
[2024-05-15 11:03:29.570178] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:32.552 [2024-05-15 11:03:29.591274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.552 passed 00:11:32.552 Test: admin_get_features_mandatory_features ...[2024-05-15 11:03:29.667476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.552 [2024-05-15 11:03:29.671501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.552 passed 00:11:32.552 Test: admin_get_features_optional_features ...[2024-05-15 11:03:29.750036] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.552 [2024-05-15 11:03:29.753052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.552 passed 00:11:32.811 Test: admin_set_features_number_of_queues ...[2024-05-15 11:03:29.831022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.811 [2024-05-15 11:03:29.935262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.811 passed 00:11:32.811 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 11:03:30.012204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.811 [2024-05-15 11:03:30.015223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.811 passed 00:11:33.070 Test: admin_get_log_page_with_lpo ...[2024-05-15 11:03:30.094268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.070 [2024-05-15 11:03:30.164174] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:33.070 [2024-05-15 11:03:30.177244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.070 passed 00:11:33.070 Test: fabric_property_get 
...[2024-05-15 11:03:30.252575] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.070 [2024-05-15 11:03:30.253797] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:33.070 [2024-05-15 11:03:30.255596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.070 passed 00:11:33.070 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 11:03:30.335065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.071 [2024-05-15 11:03:30.336294] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:33.329 [2024-05-15 11:03:30.338088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.329 passed 00:11:33.329 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 11:03:30.416069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.329 [2024-05-15 11:03:30.501177] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.329 [2024-05-15 11:03:30.517173] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.329 [2024-05-15 11:03:30.522314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.329 passed 00:11:33.588 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 11:03:30.598633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.588 [2024-05-15 11:03:30.599853] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:33.588 [2024-05-15 11:03:30.603673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.588 passed 00:11:33.588 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 11:03:30.682729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:11:33.588 [2024-05-15 11:03:30.759172] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:33.588 [2024-05-15 11:03:30.783174] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.588 [2024-05-15 11:03:30.788259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.588 passed 00:11:33.847 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 11:03:30.862517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.847 [2024-05-15 11:03:30.863747] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:33.847 [2024-05-15 11:03:30.863771] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:33.847 [2024-05-15 11:03:30.865539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.847 passed 00:11:33.847 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 11:03:30.943447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.847 [2024-05-15 11:03:31.035180] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:33.847 [2024-05-15 11:03:31.043173] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:33.847 [2024-05-15 11:03:31.051171] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:33.847 [2024-05-15 11:03:31.059172] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:33.847 [2024-05-15 11:03:31.088252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.106 passed 00:11:34.106 Test: admin_create_io_sq_verify_pc ...[2024-05-15 11:03:31.163405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: 
enabling controller 00:11:34.106 [2024-05-15 11:03:31.180179] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:34.106 [2024-05-15 11:03:31.197593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.106 passed 00:11:34.106 Test: admin_create_io_qp_max_qps ...[2024-05-15 11:03:31.278114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.481 [2024-05-15 11:03:32.394172] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:35.740 [2024-05-15 11:03:32.773540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.740 passed 00:11:35.740 Test: admin_create_io_sq_shared_cq ...[2024-05-15 11:03:32.851654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.740 [2024-05-15 11:03:32.983171] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:35.999 [2024-05-15 11:03:33.020215] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.999 passed 00:11:35.999 00:11:35.999 Run Summary: Type Total Ran Passed Failed Inactive 00:11:35.999 suites 1 1 n/a 0 0 00:11:35.999 tests 18 18 18 0 0 00:11:35.999 asserts 360 360 360 0 n/a 00:11:35.999 00:11:35.999 Elapsed time = 1.517 seconds 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2180730 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' -z 2180730 ']' 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # kill -0 2180730 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # uname 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' Linux = 
Linux ']' 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2180730 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2180730' 00:11:35.999 killing process with pid 2180730 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # kill 2180730 00:11:35.999 [2024-05-15 11:03:33.107185] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:35.999 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # wait 2180730 00:11:36.259 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:36.259 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:36.259 00:11:36.259 real 0m6.192s 00:11:36.259 user 0m17.745s 00:11:36.259 sys 0m0.451s 00:11:36.259 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:36.259 11:03:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:36.259 ************************************ 00:11:36.259 END TEST nvmf_vfio_user_nvme_compliance 00:11:36.259 ************************************ 00:11:36.259 11:03:33 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:36.259 11:03:33 nvmf_tcp -- 
common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:36.259 11:03:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:36.259 11:03:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.259 ************************************ 00:11:36.259 START TEST nvmf_vfio_user_fuzz 00:11:36.259 ************************************ 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:36.260 * Looking for test storage... 00:11:36.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2181867 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2181867' 00:11:36.260 Process pid: 2181867 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2181867 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # '[' -z 2181867 ']' 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:36.260 11:03:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:37.196 11:03:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:37.196 11:03:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # return 0 00:11:37.196 11:03:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd 
bdev_malloc_create 64 512 -b malloc0 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.133 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.392 malloc0 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:38.392 11:03:35 nvmf_tcp.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:10.497 Fuzzing completed. Shutting down the fuzz application 00:12:10.497 00:12:10.497 Dumping successful admin opcodes: 00:12:10.497 8, 9, 10, 24, 00:12:10.497 Dumping successful io opcodes: 00:12:10.497 0, 00:12:10.497 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1106009, total successful commands: 4356, random_seed: 1826295424 00:12:10.497 NS: 0x200003a1ef00 admin qp, Total commands completed: 272045, total successful commands: 2192, random_seed: 448991872 00:12:10.497 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:10.497 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.497 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.497 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.497 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2181867 00:12:10.497 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' -z 2181867 ']' 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # kill -0 2181867 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # uname 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2181867 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # 
'[' reactor_0 = sudo ']' 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2181867' 00:12:10.498 killing process with pid 2181867 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # kill 2181867 00:12:10.498 11:04:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # wait 2181867 00:12:10.498 11:04:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:10.498 11:04:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:10.498 00:12:10.498 real 0m32.807s 00:12:10.498 user 0m35.483s 00:12:10.498 sys 0m25.478s 00:12:10.498 11:04:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:10.498 11:04:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.498 ************************************ 00:12:10.498 END TEST nvmf_vfio_user_fuzz 00:12:10.498 ************************************ 00:12:10.498 11:04:06 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.498 11:04:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:10.498 11:04:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:10.498 11:04:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.498 ************************************ 00:12:10.498 START TEST nvmf_host_management 00:12:10.498 ************************************ 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh 
--transport=tcp 00:12:10.498 * Looking for test storage... 00:12:10.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.498 11:04:06 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.498 11:04:06 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.499 11:04:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:14.786 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.786 
11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:14.786 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.786 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:14.787 Found net devices under 0000:86:00.0: cvl_0_0 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:14.787 Found net devices under 0000:86:00.1: cvl_0_1 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.787 11:04:11 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:12:14.787 00:12:14.787 --- 10.0.0.2 ping statistics --- 00:12:14.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.787 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:14.787 00:12:14.787 --- 10.0.0.1 ping statistics --- 00:12:14.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.787 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:14.787 11:04:11 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2190238 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2190238 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2190238 ']' 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:14.787 11:04:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.787 [2024-05-15 11:04:11.851320] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:12:14.787 [2024-05-15 11:04:11.851359] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.787 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.787 [2024-05-15 11:04:11.908673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.787 [2024-05-15 11:04:11.981847] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.787 [2024-05-15 11:04:11.981888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.787 [2024-05-15 11:04:11.981895] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.787 [2024-05-15 11:04:11.981901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.787 [2024-05-15 11:04:11.981906] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:14.787 [2024-05-15 11:04:11.981967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.787 [2024-05-15 11:04:11.982055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.787 [2024-05-15 11:04:11.982141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.787 [2024-05-15 11:04:11.982142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 [2024-05-15 11:04:12.698056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 11:04:12 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 Malloc0 00:12:15.723 [2024-05-15 11:04:12.757692] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:15.723 [2024-05-15 11:04:12.757927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2190507 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2190507 /var/tmp/bdevperf.sock 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2190507 ']' 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.723 { 00:12:15.723 "params": { 00:12:15.723 "name": "Nvme$subsystem", 00:12:15.723 "trtype": "$TEST_TRANSPORT", 00:12:15.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.723 "adrfam": "ipv4", 00:12:15.723 "trsvcid": "$NVMF_PORT", 00:12:15.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.723 "hdgst": ${hdgst:-false}, 00:12:15.723 "ddgst": ${ddgst:-false} 00:12:15.723 }, 00:12:15.723 "method": "bdev_nvme_attach_controller" 00:12:15.723 } 00:12:15.723 EOF 00:12:15.723 )") 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:15.723 11:04:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.723 "params": { 00:12:15.723 "name": "Nvme0", 00:12:15.723 "trtype": "tcp", 00:12:15.723 "traddr": "10.0.0.2", 00:12:15.723 "adrfam": "ipv4", 00:12:15.723 "trsvcid": "4420", 00:12:15.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:15.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:15.723 "hdgst": false, 00:12:15.723 "ddgst": false 00:12:15.723 }, 00:12:15.723 "method": "bdev_nvme_attach_controller" 00:12:15.723 }' 00:12:15.723 [2024-05-15 11:04:12.850837] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:12:15.723 [2024-05-15 11:04:12.850879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190507 ] 00:12:15.723 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.723 [2024-05-15 11:04:12.904207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.723 [2024-05-15 11:04:12.976915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.982 Running I/O for 10 seconds... 
00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.550 
11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1038 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1038 -ge 100 ']' 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.550 [2024-05-15 11:04:13.761290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d11c0 is same with the state(5) to be set 00:12:16.550 [2024-05-15 11:04:13.761362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d11c0 is same with the state(5) to be set 00:12:16.550 [2024-05-15 11:04:13.761374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d11c0 is same with the state(5) to be set 00:12:16.550 [2024-05-15 11:04:13.761380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d11c0 is same with the state(5) to be set 00:12:16.550 [2024-05-15 11:04:13.761386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d11c0 is same with the state(5) to be set 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.550 [2024-05-15 11:04:13.767418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.550 [2024-05-15 11:04:13.767449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.550 [2024-05-15 11:04:13.767467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.550 [2024-05-15 11:04:13.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.550 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.550 [2024-05-15 11:04:13.767497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff2840 is same with the state(5) to be set 00:12:16.550 [2024-05-15 11:04:13.767538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.550 [2024-05-15 11:04:13.767547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.550 [2024-05-15 11:04:13.767568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.550 [2024-05-15 11:04:13.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.550 [2024-05-15 11:04:13.767600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.550 [2024-05-15 11:04:13.767608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.550 [2024-05-15 11:04:13.767615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767902] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.767991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.767998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.768007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.768014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.768030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.768039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.551 [2024-05-15 11:04:13.768046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.551 [2024-05-15 11:04:13.768054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.552 [2024-05-15 11:04:13.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.552 [2024-05-15 11:04:13.768069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.552 [2024-05-15 11:04:13.768076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.552 
[2024-05-15 11:04:13.768084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:12:16.552 [2024-05-15 11:04:13.768091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:16.552 [... the same WRITE command / ABORTED - SQ DELETION completion pair repeats for cid:35 through cid:63 (lba:20864 through lba:24448, len:128 each), timestamps 2024-05-15 11:04:13.768100 through 11:04:13.768554 ...]
00:12:16.553 [2024-05-15 11:04:13.768616] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2403740 was disconnected and freed. reset controller.
00:12:16.553 [2024-05-15 11:04:13.769517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:16.553 task offset: 16384 on job bdev=Nvme0n1 fails
00:12:16.553
00:12:16.553 Latency(us)
00:12:16.553 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min      max
00:12:16.553 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:16.553 Job: Nvme0n1 ended in about 0.60 seconds with error
00:12:16.553 Verification LBA range: start 0x0 length 0x400
00:12:16.553 Nvme0n1 : 0.60                   1934.73  120.92  107.48  0.00  30682.81  1666.89  27354.16
00:12:16.553 ===================================================================================================================
00:12:16.553 Total :                          1934.73  120.92  107.48  0.00  30682.81  1666.89  27354.16
00:12:16.553 [2024-05-15 11:04:13.771104] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:16.553 [2024-05-15 11:04:13.771118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff2840 (9): Bad file descriptor
00:12:16.553 11:04:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0
== 0 ]] 00:12:16.553 11:04:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:16.812 [2024-05-15 11:04:13.823237] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2190507 00:12:17.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2190507) - No such process 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:17.750 { 00:12:17.750 "params": { 00:12:17.750 "name": "Nvme$subsystem", 00:12:17.750 "trtype": "$TEST_TRANSPORT", 00:12:17.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.750 "adrfam": "ipv4", 00:12:17.750 "trsvcid": "$NVMF_PORT", 00:12:17.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.750 "hdgst": ${hdgst:-false}, 00:12:17.750 "ddgst": ${ddgst:-false} 00:12:17.750 }, 00:12:17.750 "method": 
"bdev_nvme_attach_controller" 00:12:17.750 } 00:12:17.750 EOF 00:12:17.750 )") 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:17.750 11:04:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:17.750 "params": { 00:12:17.750 "name": "Nvme0", 00:12:17.750 "trtype": "tcp", 00:12:17.750 "traddr": "10.0.0.2", 00:12:17.750 "adrfam": "ipv4", 00:12:17.750 "trsvcid": "4420", 00:12:17.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:17.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:17.750 "hdgst": false, 00:12:17.750 "ddgst": false 00:12:17.750 }, 00:12:17.750 "method": "bdev_nvme_attach_controller" 00:12:17.750 }' 00:12:17.750 [2024-05-15 11:04:14.826487] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:12:17.750 [2024-05-15 11:04:14.826533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190758 ] 00:12:17.750 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.750 [2024-05-15 11:04:14.880299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.750 [2024-05-15 11:04:14.951250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.010 Running I/O for 1 seconds... 
00:12:18.947
00:12:18.947 Latency(us)
00:12:18.947 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min      max
00:12:18.947 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:18.947 Verification LBA range: start 0x0 length 0x400
00:12:18.947 Nvme0n1 : 1.02                   1954.37  122.15  0.00    0.00  32229.35  6382.64  27240.18
00:12:18.947 ===================================================================================================================
00:12:18.947 Total :                          1954.37  122.15  0.00    0.00  32229.35  6382.64  27240.18
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:19.206 rmmod nvme_tcp
00:12:19.206 rmmod nvme_fabrics
00:12:19.206 rmmod nvme_keyring
00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:19.206
11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2190238 ']' 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2190238 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 2190238 ']' 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 2190238 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2190238 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2190238' 00:12:19.206 killing process with pid 2190238 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 2190238 00:12:19.206 [2024-05-15 11:04:16.442260] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:19.206 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 2190238 00:12:19.465 [2024-05-15 11:04:16.651858] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.465 11:04:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.465 11:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.006 11:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.006 11:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:22.006 00:12:22.006 real 0m12.459s 00:12:22.006 user 0m22.619s 00:12:22.006 sys 0m5.221s 00:12:22.006 11:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:22.006 11:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.006 ************************************ 00:12:22.006 END TEST nvmf_host_management 00:12:22.006 ************************************ 00:12:22.006 11:04:18 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:22.006 11:04:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:22.006 11:04:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:22.006 11:04:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.006 ************************************ 00:12:22.006 START TEST nvmf_lvol 00:12:22.006 ************************************ 00:12:22.006 11:04:18 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:22.006 * Looking for test storage... 00:12:22.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.007 11:04:18 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.007 11:04:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.277 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.278 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.278 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.278 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.278 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:12:27.278 00:12:27.278 --- 10.0.0.2 ping statistics --- 00:12:27.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.278 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:27.278 00:12:27.278 --- 10.0.0.1 ping statistics --- 00:12:27.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.278 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2194509 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2194509 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 2194509 ']' 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 
-- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:27.278 11:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.278 [2024-05-15 11:04:24.425996] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:12:27.278 [2024-05-15 11:04:24.426040] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.278 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.278 [2024-05-15 11:04:24.483983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.538 [2024-05-15 11:04:24.558282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.538 [2024-05-15 11:04:24.558320] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.539 [2024-05-15 11:04:24.558327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.539 [2024-05-15 11:04:24.558333] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.539 [2024-05-15 11:04:24.558338] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:27.539 [2024-05-15 11:04:24.558388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.539 [2024-05-15 11:04:24.558457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.539 [2024-05-15 11:04:24.558462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.106 11:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:28.365 [2024-05-15 11:04:25.412209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.365 11:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.624 11:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:28.624 11:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.624 11:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:28.624 11:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:28.884 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:29.142 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a50ac5ff-c3f1-4a1a-a663-8faab147abaf 00:12:29.142 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a50ac5ff-c3f1-4a1a-a663-8faab147abaf lvol 20 00:12:29.142 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=60c2f424-bae3-436a-a0b7-ae6fdf9b3b13 00:12:29.142 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:29.401 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60c2f424-bae3-436a-a0b7-ae6fdf9b3b13 00:12:29.660 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:29.660 [2024-05-15 11:04:26.897621] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:29.660 [2024-05-15 11:04:26.897871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.660 11:04:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.919 11:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2195007 00:12:29.919 11:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:29.919 11:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:29.919 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.855 11:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 60c2f424-bae3-436a-a0b7-ae6fdf9b3b13 MY_SNAPSHOT 00:12:31.113 11:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0a41c4aa-618e-4daa-a937-95ce673d6dfb 00:12:31.113 11:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 60c2f424-bae3-436a-a0b7-ae6fdf9b3b13 30 00:12:31.372 11:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0a41c4aa-618e-4daa-a937-95ce673d6dfb MY_CLONE 00:12:31.631 11:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e70c8eec-2869-4459-b366-1d6b694146f8 00:12:31.631 11:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e70c8eec-2869-4459-b366-1d6b694146f8 00:12:32.201 11:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2195007 00:12:40.356 Initializing NVMe Controllers 00:12:40.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:40.356 Controller IO queue size 128, less than required. 00:12:40.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:40.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:40.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:40.356 Initialization complete. Launching workers. 
00:12:40.356 ======================================================== 00:12:40.356 Latency(us) 00:12:40.356 Device Information : IOPS MiB/s Average min max 00:12:40.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12128.20 47.38 10560.85 1756.89 61716.13 00:12:40.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12031.90 47.00 10640.23 2645.19 59965.22 00:12:40.356 ======================================================== 00:12:40.356 Total : 24160.10 94.38 10600.38 1756.89 61716.13 00:12:40.356 00:12:40.356 11:04:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.356 11:04:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60c2f424-bae3-436a-a0b7-ae6fdf9b3b13 00:12:40.616 11:04:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a50ac5ff-c3f1-4a1a-a663-8faab147abaf 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.875 11:04:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.875 rmmod nvme_tcp 00:12:40.875 rmmod nvme_fabrics 00:12:40.875 rmmod nvme_keyring 00:12:40.875 
11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2194509 ']' 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2194509 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 2194509 ']' 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 2194509 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2194509 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2194509' 00:12:40.875 killing process with pid 2194509 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 2194509 00:12:40.875 [2024-05-15 11:04:38.065724] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:40.875 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 2194509 00:12:41.134 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.134 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.134 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.134 11:04:38 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.135 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.135 11:04:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.135 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.135 11:04:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.670 00:12:43.670 real 0m21.564s 00:12:43.670 user 1m3.877s 00:12:43.670 sys 0m6.681s 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:43.670 ************************************ 00:12:43.670 END TEST nvmf_lvol 00:12:43.670 ************************************ 00:12:43.670 11:04:40 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:43.670 11:04:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:43.670 11:04:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:43.670 11:04:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:43.670 ************************************ 00:12:43.670 START TEST nvmf_lvs_grow 00:12:43.670 ************************************ 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:43.670 * Looking for test storage... 
00:12:43.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.670 11:04:40 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.670 11:04:40 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.670 11:04:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.989 11:04:45 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.989 11:04:45 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:48.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:48.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.989 11:04:45 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:48.989 Found net devices under 0000:86:00.0: cvl_0_0 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:48.989 Found net devices under 0000:86:00.1: cvl_0_1 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.989 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:12:48.989 00:12:48.989 --- 10.0.0.2 ping statistics --- 00:12:48.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.990 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:12:48.990 00:12:48.990 --- 10.0.0.1 ping statistics --- 00:12:48.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.990 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2200358 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2200358 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 2200358 ']' 
00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:48.990 11:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:48.990 [2024-05-15 11:04:45.968073] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:12:48.990 [2024-05-15 11:04:45.968113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.990 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.990 [2024-05-15 11:04:46.024749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.990 [2024-05-15 11:04:46.100939] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.990 [2024-05-15 11:04:46.100977] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.990 [2024-05-15 11:04:46.100983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.990 [2024-05-15 11:04:46.100989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.990 [2024-05-15 11:04:46.100994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:48.990 [2024-05-15 11:04:46.101019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.557 11:04:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:49.815 [2024-05-15 11:04:46.953445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.815 11:04:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:49.815 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:49.815 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:49.815 11:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:49.815 ************************************ 00:12:49.815 START TEST lvs_grow_clean 00:12:49.815 ************************************ 00:12:49.815 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:12:49.815 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:49.815 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:49.815 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:49.815 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:49.815 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:49.816 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:49.816 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:49.816 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:49.816 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:50.074 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:50.074 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:50.333 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4e8a0e23-ad3a-410b-8417-22605ca97a21 00:12:50.333 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:12:50.333 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:50.333 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:50.333 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:50.333 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 lvol 150 00:12:50.592 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e2d35af-d8ff-4373-9caf-64ed5484dec6 00:12:50.592 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.592 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:50.851 [2024-05-15 11:04:47.890927] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:50.851 [2024-05-15 11:04:47.890977] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:50.851 true 00:12:50.851 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:12:50.851 11:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:50.851 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:50.851 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:12:51.110 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e2d35af-d8ff-4373-9caf-64ed5484dec6 00:12:51.368 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:51.368 [2024-05-15 11:04:48.540699] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:51.368 [2024-05-15 11:04:48.540922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.368 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2200858 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2200858 /var/tmp/bdevperf.sock 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 2200858 ']' 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:51.628 11:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:51.628 [2024-05-15 11:04:48.747953] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:12:51.628 [2024-05-15 11:04:48.747997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200858 ] 00:12:51.628 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.628 [2024-05-15 11:04:48.800487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.628 [2024-05-15 11:04:48.871950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.564 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:52.564 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:12:52.564 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:52.564 Nvme0n1 00:12:52.564 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:52.823 [ 00:12:52.823 { 00:12:52.823 "name": "Nvme0n1", 00:12:52.823 "aliases": [ 00:12:52.823 "6e2d35af-d8ff-4373-9caf-64ed5484dec6" 00:12:52.823 ], 00:12:52.823 "product_name": "NVMe disk", 00:12:52.823 "block_size": 4096, 00:12:52.823 "num_blocks": 38912, 00:12:52.823 "uuid": "6e2d35af-d8ff-4373-9caf-64ed5484dec6", 00:12:52.823 "assigned_rate_limits": { 00:12:52.823 "rw_ios_per_sec": 0, 00:12:52.823 "rw_mbytes_per_sec": 0, 00:12:52.823 "r_mbytes_per_sec": 0, 00:12:52.823 "w_mbytes_per_sec": 0 00:12:52.823 }, 00:12:52.823 "claimed": false, 00:12:52.823 "zoned": false, 00:12:52.823 "supported_io_types": { 00:12:52.823 "read": true, 00:12:52.823 "write": true, 00:12:52.823 "unmap": true, 00:12:52.823 "write_zeroes": true, 00:12:52.823 "flush": true, 00:12:52.823 "reset": true, 00:12:52.823 "compare": true, 00:12:52.823 "compare_and_write": true, 00:12:52.823 "abort": true, 00:12:52.823 "nvme_admin": true, 00:12:52.823 "nvme_io": true 00:12:52.823 }, 00:12:52.823 "memory_domains": [ 00:12:52.823 { 00:12:52.823 "dma_device_id": "system", 00:12:52.823 "dma_device_type": 1 00:12:52.823 } 00:12:52.823 ], 00:12:52.823 "driver_specific": { 00:12:52.823 "nvme": [ 00:12:52.823 { 00:12:52.823 "trid": { 00:12:52.823 "trtype": "TCP", 00:12:52.823 "adrfam": "IPv4", 00:12:52.823 "traddr": "10.0.0.2", 00:12:52.823 "trsvcid": "4420", 00:12:52.823 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:52.823 }, 00:12:52.823 "ctrlr_data": { 00:12:52.823 "cntlid": 1, 00:12:52.823 "vendor_id": "0x8086", 00:12:52.823 "model_number": "SPDK bdev Controller", 00:12:52.823 "serial_number": "SPDK0", 00:12:52.823 "firmware_revision": "24.05", 00:12:52.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:52.823 "oacs": { 00:12:52.823 "security": 0, 00:12:52.823 "format": 0, 00:12:52.823 "firmware": 0, 00:12:52.823 "ns_manage": 0 00:12:52.823 }, 00:12:52.823 "multi_ctrlr": true, 00:12:52.823 "ana_reporting": false 00:12:52.823 }, 00:12:52.823 
"vs": { 00:12:52.823 "nvme_version": "1.3" 00:12:52.823 }, 00:12:52.823 "ns_data": { 00:12:52.823 "id": 1, 00:12:52.823 "can_share": true 00:12:52.823 } 00:12:52.823 } 00:12:52.823 ], 00:12:52.823 "mp_policy": "active_passive" 00:12:52.823 } 00:12:52.823 } 00:12:52.823 ] 00:12:52.823 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2201090 00:12:52.823 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:52.823 11:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:52.823 Running I/O for 10 seconds... 00:12:54.201 Latency(us) 00:12:54.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.201 Nvme0n1 : 1.00 22749.00 88.86 0.00 0.00 0.00 0.00 0.00 00:12:54.201 =================================================================================================================== 00:12:54.201 Total : 22749.00 88.86 0.00 0.00 0.00 0.00 0.00 00:12:54.201 00:12:54.769 11:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:12:55.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.029 Nvme0n1 : 2.00 23160.50 90.47 0.00 0.00 0.00 0.00 0.00 00:12:55.029 =================================================================================================================== 00:12:55.029 Total : 23160.50 90.47 0.00 0.00 0.00 0.00 0.00 00:12:55.029 00:12:55.029 true 00:12:55.029 11:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:12:55.029 11:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:55.288 11:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:55.288 11:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:55.288 11:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2201090 00:12:55.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.856 Nvme0n1 : 3.00 23251.00 90.82 0.00 0.00 0.00 0.00 0.00 00:12:55.856 =================================================================================================================== 00:12:55.856 Total : 23251.00 90.82 0.00 0.00 0.00 0.00 0.00 00:12:55.856 00:12:56.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.791 Nvme0n1 : 4.00 23363.75 91.26 0.00 0.00 0.00 0.00 0.00 00:12:56.791 =================================================================================================================== 00:12:56.791 Total : 23363.75 91.26 0.00 0.00 0.00 0.00 0.00 00:12:56.791 00:12:58.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.169 Nvme0n1 : 5.00 23440.80 91.57 0.00 0.00 0.00 0.00 0.00 00:12:58.169 =================================================================================================================== 00:12:58.169 Total : 23440.80 91.57 0.00 0.00 0.00 0.00 0.00 00:12:58.169 00:12:59.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.106 Nvme0n1 : 6.00 23478.67 91.71 0.00 0.00 0.00 0.00 0.00 00:12:59.106 =================================================================================================================== 00:12:59.106 Total : 23478.67 91.71 0.00 0.00 0.00 0.00 0.00 00:12:59.106 00:13:00.043 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.043 Nvme0n1 : 7.00 23492.00 91.77 0.00 0.00 0.00 0.00 0.00 00:13:00.043 =================================================================================================================== 00:13:00.043 Total : 23492.00 91.77 0.00 0.00 0.00 0.00 0.00 00:13:00.043 00:13:00.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.980 Nvme0n1 : 8.00 23520.00 91.88 0.00 0.00 0.00 0.00 0.00 00:13:00.980 =================================================================================================================== 00:13:00.980 Total : 23520.00 91.88 0.00 0.00 0.00 0.00 0.00 00:13:00.980 00:13:01.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.942 Nvme0n1 : 9.00 23541.11 91.96 0.00 0.00 0.00 0.00 0.00 00:13:01.942 =================================================================================================================== 00:13:01.942 Total : 23541.11 91.96 0.00 0.00 0.00 0.00 0.00 00:13:01.942 00:13:02.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.879 Nvme0n1 : 10.00 23551.20 92.00 0.00 0.00 0.00 0.00 0.00 00:13:02.879 =================================================================================================================== 00:13:02.879 Total : 23551.20 92.00 0.00 0.00 0.00 0.00 0.00 00:13:02.879 00:13:02.879 00:13:02.879 Latency(us) 00:13:02.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.879 Nvme0n1 : 10.00 23554.93 92.01 0.00 0.00 5430.64 2664.18 10086.85 00:13:02.879 =================================================================================================================== 00:13:02.879 Total : 23554.93 92.01 0.00 0.00 5430.64 2664.18 10086.85 00:13:02.879 0 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@66 -- # killprocess 2200858 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 2200858 ']' 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 2200858 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2200858 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2200858' 00:13:02.879 killing process with pid 2200858 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 2200858 00:13:02.879 Received shutdown signal, test time was about 10.000000 seconds 00:13:02.879 00:13:02.879 Latency(us) 00:13:02.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.879 =================================================================================================================== 00:13:02.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:02.879 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 2200858 00:13:03.138 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.396 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:03.655 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:03.655 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:03.655 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:03.655 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:03.655 11:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:03.915 [2024-05-15 11:05:01.042477] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:03.915 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:04.174 request: 00:13:04.174 { 00:13:04.174 "uuid": "4e8a0e23-ad3a-410b-8417-22605ca97a21", 00:13:04.174 "method": "bdev_lvol_get_lvstores", 00:13:04.174 "req_id": 1 00:13:04.174 } 00:13:04.174 Got JSON-RPC error response 00:13:04.174 response: 00:13:04.174 { 00:13:04.174 "code": -19, 00:13:04.174 "message": "No such device" 00:13:04.174 } 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:04.174 aio_bdev 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e2d35af-d8ff-4373-9caf-64ed5484dec6 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=6e2d35af-d8ff-4373-9caf-64ed5484dec6 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:13:04.174 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:04.432 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2d35af-d8ff-4373-9caf-64ed5484dec6 -t 2000 00:13:04.691 [ 00:13:04.691 { 00:13:04.691 "name": "6e2d35af-d8ff-4373-9caf-64ed5484dec6", 00:13:04.691 "aliases": [ 00:13:04.691 "lvs/lvol" 00:13:04.691 ], 00:13:04.691 "product_name": "Logical Volume", 00:13:04.691 "block_size": 4096, 00:13:04.691 "num_blocks": 38912, 00:13:04.691 "uuid": "6e2d35af-d8ff-4373-9caf-64ed5484dec6", 00:13:04.691 "assigned_rate_limits": { 00:13:04.691 "rw_ios_per_sec": 0, 00:13:04.691 "rw_mbytes_per_sec": 0, 00:13:04.691 "r_mbytes_per_sec": 0, 00:13:04.691 "w_mbytes_per_sec": 0 00:13:04.691 }, 00:13:04.691 "claimed": false, 00:13:04.691 "zoned": false, 00:13:04.691 "supported_io_types": { 00:13:04.691 "read": true, 00:13:04.691 "write": true, 00:13:04.691 "unmap": true, 
00:13:04.691 "write_zeroes": true, 00:13:04.691 "flush": false, 00:13:04.691 "reset": true, 00:13:04.691 "compare": false, 00:13:04.691 "compare_and_write": false, 00:13:04.691 "abort": false, 00:13:04.691 "nvme_admin": false, 00:13:04.691 "nvme_io": false 00:13:04.691 }, 00:13:04.691 "driver_specific": { 00:13:04.691 "lvol": { 00:13:04.691 "lvol_store_uuid": "4e8a0e23-ad3a-410b-8417-22605ca97a21", 00:13:04.691 "base_bdev": "aio_bdev", 00:13:04.691 "thin_provision": false, 00:13:04.691 "num_allocated_clusters": 38, 00:13:04.691 "snapshot": false, 00:13:04.691 "clone": false, 00:13:04.691 "esnap_clone": false 00:13:04.691 } 00:13:04.691 } 00:13:04.691 } 00:13:04.691 ] 00:13:04.691 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:13:04.691 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:04.691 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:04.691 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:04.691 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:04.691 11:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:04.950 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:04.950 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e2d35af-d8ff-4373-9caf-64ed5484dec6 00:13:05.209 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e8a0e23-ad3a-410b-8417-22605ca97a21 00:13:05.209 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.468 00:13:05.468 real 0m15.649s 00:13:05.468 user 0m15.412s 00:13:05.468 sys 0m1.326s 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:05.468 ************************************ 00:13:05.468 END TEST lvs_grow_clean 00:13:05.468 ************************************ 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:05.468 ************************************ 00:13:05.468 START TEST lvs_grow_dirty 00:13:05.468 ************************************ 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.468 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.727 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:05.727 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:05.727 11:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:05.986 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:05.986 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:05.986 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:06.245 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:06.245 11:05:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:06.245 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 lvol 150 00:13:06.245 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=04168fb2-1afc-4992-89b3-85b859507f93 00:13:06.245 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:06.245 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:06.504 [2024-05-15 11:05:03.607867] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:06.504 [2024-05-15 11:05:03.607915] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:06.504 true 00:13:06.504 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:06.504 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:06.763 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:06.763 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:06.763 11:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04168fb2-1afc-4992-89b3-85b859507f93 00:13:07.022 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:07.022 [2024-05-15 11:05:04.265872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2203475 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2203475 /var/tmp/bdevperf.sock 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2203475 ']' 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
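At this point the trace has exported the lvol over NVMe/TCP and is about to point bdevperf at it. The sequence boils down to four RPC calls; the dry-run sketch below only prints them (the `run` wrapper and the relative `scripts/rpc.py` path are assumptions for illustration — swap `run`'s body for `"$@"` to execute against a live target; the NQN and lvol UUID are the ones from this run):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the export-and-attach sequence shown in the trace above.
rpc="scripts/rpc.py"                               # assumed relative path
nqn="nqn.2016-06.io.spdk:cnode0"
lvol_uuid="04168fb2-1afc-4992-89b3-85b859507f93"   # UUID from this run

run() { printf '%s\n' "$*"; }                      # print instead of execute

run "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK0
run "$rpc" nvmf_subsystem_add_ns "$nqn" "$lvol_uuid"
run "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
# bdevperf then attaches to the target through its own RPC socket:
run "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
```

The attached controller surfaces as `Nvme0n1`, which is the bdev the `bdev_get_bdevs -b Nvme0n1` call below inspects.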
00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:07.281 11:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.281 [2024-05-15 11:05:04.510992] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:07.281 [2024-05-15 11:05:04.511036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203475 ] 00:13:07.281 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.540 [2024-05-15 11:05:04.565021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.540 [2024-05-15 11:05:04.637071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.108 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:08.108 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:13:08.108 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:08.676 Nvme0n1 00:13:08.676 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:08.676 [ 00:13:08.676 { 00:13:08.676 "name": "Nvme0n1", 00:13:08.676 "aliases": [ 00:13:08.676 
"04168fb2-1afc-4992-89b3-85b859507f93" 00:13:08.676 ], 00:13:08.676 "product_name": "NVMe disk", 00:13:08.676 "block_size": 4096, 00:13:08.676 "num_blocks": 38912, 00:13:08.676 "uuid": "04168fb2-1afc-4992-89b3-85b859507f93", 00:13:08.676 "assigned_rate_limits": { 00:13:08.676 "rw_ios_per_sec": 0, 00:13:08.676 "rw_mbytes_per_sec": 0, 00:13:08.676 "r_mbytes_per_sec": 0, 00:13:08.676 "w_mbytes_per_sec": 0 00:13:08.676 }, 00:13:08.676 "claimed": false, 00:13:08.676 "zoned": false, 00:13:08.676 "supported_io_types": { 00:13:08.676 "read": true, 00:13:08.676 "write": true, 00:13:08.676 "unmap": true, 00:13:08.676 "write_zeroes": true, 00:13:08.676 "flush": true, 00:13:08.676 "reset": true, 00:13:08.676 "compare": true, 00:13:08.676 "compare_and_write": true, 00:13:08.676 "abort": true, 00:13:08.676 "nvme_admin": true, 00:13:08.676 "nvme_io": true 00:13:08.676 }, 00:13:08.676 "memory_domains": [ 00:13:08.676 { 00:13:08.676 "dma_device_id": "system", 00:13:08.676 "dma_device_type": 1 00:13:08.676 } 00:13:08.676 ], 00:13:08.676 "driver_specific": { 00:13:08.676 "nvme": [ 00:13:08.676 { 00:13:08.676 "trid": { 00:13:08.676 "trtype": "TCP", 00:13:08.676 "adrfam": "IPv4", 00:13:08.676 "traddr": "10.0.0.2", 00:13:08.676 "trsvcid": "4420", 00:13:08.676 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:08.676 }, 00:13:08.676 "ctrlr_data": { 00:13:08.676 "cntlid": 1, 00:13:08.676 "vendor_id": "0x8086", 00:13:08.676 "model_number": "SPDK bdev Controller", 00:13:08.676 "serial_number": "SPDK0", 00:13:08.676 "firmware_revision": "24.05", 00:13:08.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.676 "oacs": { 00:13:08.676 "security": 0, 00:13:08.677 "format": 0, 00:13:08.677 "firmware": 0, 00:13:08.677 "ns_manage": 0 00:13:08.677 }, 00:13:08.677 "multi_ctrlr": true, 00:13:08.677 "ana_reporting": false 00:13:08.677 }, 00:13:08.677 "vs": { 00:13:08.677 "nvme_version": "1.3" 00:13:08.677 }, 00:13:08.677 "ns_data": { 00:13:08.677 "id": 1, 00:13:08.677 "can_share": true 00:13:08.677 } 
00:13:08.677 } 00:13:08.677 ], 00:13:08.677 "mp_policy": "active_passive" 00:13:08.677 } 00:13:08.677 } 00:13:08.677 ] 00:13:08.677 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2203713 00:13:08.677 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:08.677 11:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:08.936 Running I/O for 10 seconds... 00:13:09.873 Latency(us) 00:13:09.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.873 Nvme0n1 : 1.00 23268.00 90.89 0.00 0.00 0.00 0.00 0.00 00:13:09.873 =================================================================================================================== 00:13:09.873 Total : 23268.00 90.89 0.00 0.00 0.00 0.00 0.00 00:13:09.873 00:13:10.809 11:05:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:10.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.809 Nvme0n1 : 2.00 23427.50 91.51 0.00 0.00 0.00 0.00 0.00 00:13:10.809 =================================================================================================================== 00:13:10.809 Total : 23427.50 91.51 0.00 0.00 0.00 0.00 0.00 00:13:10.809 00:13:11.068 true 00:13:11.068 11:05:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:11.068 11:05:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:13:11.068 11:05:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:11.068 11:05:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:11.068 11:05:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2203713 00:13:12.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.004 Nvme0n1 : 3.00 23457.67 91.63 0.00 0.00 0.00 0.00 0.00 00:13:12.004 =================================================================================================================== 00:13:12.004 Total : 23457.67 91.63 0.00 0.00 0.00 0.00 0.00 00:13:12.004 00:13:12.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.941 Nvme0n1 : 4.00 23501.50 91.80 0.00 0.00 0.00 0.00 0.00 00:13:12.941 =================================================================================================================== 00:13:12.941 Total : 23501.50 91.80 0.00 0.00 0.00 0.00 0.00 00:13:12.941 00:13:13.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.877 Nvme0n1 : 5.00 23491.40 91.76 0.00 0.00 0.00 0.00 0.00 00:13:13.877 =================================================================================================================== 00:13:13.877 Total : 23491.40 91.76 0.00 0.00 0.00 0.00 0.00 00:13:13.877 00:13:14.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.814 Nvme0n1 : 6.00 23528.83 91.91 0.00 0.00 0.00 0.00 0.00 00:13:14.814 =================================================================================================================== 00:13:14.814 Total : 23528.83 91.91 0.00 0.00 0.00 0.00 0.00 00:13:14.814 00:13:15.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.750 Nvme0n1 : 7.00 23561.14 92.04 0.00 0.00 0.00 0.00 0.00 00:13:15.750 
=================================================================================================================== 00:13:15.750 Total : 23561.14 92.04 0.00 0.00 0.00 0.00 0.00 00:13:15.750 00:13:17.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.127 Nvme0n1 : 8.00 23594.25 92.17 0.00 0.00 0.00 0.00 0.00 00:13:17.127 =================================================================================================================== 00:13:17.127 Total : 23594.25 92.17 0.00 0.00 0.00 0.00 0.00 00:13:17.127 00:13:18.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.064 Nvme0n1 : 9.00 23618.67 92.26 0.00 0.00 0.00 0.00 0.00 00:13:18.064 =================================================================================================================== 00:13:18.064 Total : 23618.67 92.26 0.00 0.00 0.00 0.00 0.00 00:13:18.064 00:13:19.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.001 Nvme0n1 : 10.00 23629.10 92.30 0.00 0.00 0.00 0.00 0.00 00:13:19.001 =================================================================================================================== 00:13:19.001 Total : 23629.10 92.30 0.00 0.00 0.00 0.00 0.00 00:13:19.001 00:13:19.001 00:13:19.001 Latency(us) 00:13:19.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.001 Nvme0n1 : 10.01 23628.74 92.30 0.00 0.00 5413.54 3262.55 11397.57 00:13:19.001 =================================================================================================================== 00:13:19.001 Total : 23628.74 92.30 0.00 0.00 5413.54 3262.55 11397.57 00:13:19.001 0 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2203475 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 
2203475 ']' 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 2203475 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2203475 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2203475' 00:13:19.001 killing process with pid 2203475 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 2203475 00:13:19.001 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.001 00:13:19.001 Latency(us) 00:13:19.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.001 =================================================================================================================== 00:13:19.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.001 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 2203475 00:13:19.259 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:19.259 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:19.518 11:05:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:19.518 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2200358 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2200358 00:13:19.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2200358 Killed "${NVMF_APP[@]}" "$@" 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2205552 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2205552 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2205552 
']' 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:19.776 11:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:19.776 [2024-05-15 11:05:16.913470] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:19.776 [2024-05-15 11:05:16.913514] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.776 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.776 [2024-05-15 11:05:16.971323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.034 [2024-05-15 11:05:17.050942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.034 [2024-05-15 11:05:17.050975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.034 [2024-05-15 11:05:17.050982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.034 [2024-05-15 11:05:17.050988] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.034 [2024-05-15 11:05:17.050993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
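The dirty half of the test hinges on this restart: the original target was killed with SIGKILL while the lvstore was dirty, a fresh `nvmf_tgt` is started, and re-creating the AIO bdev is what triggers the blobstore recovery (`bs_recover` / `Recover: blob` notices) seen just below. A dry-run sketch of that flow — `run` only prints the commands, the binary and rpc.py paths are assumptions, and the pid is the one from this run:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the dirty-restart flow around this point in the log.
rpc="scripts/rpc.py"                   # assumed relative path
aio_file="test/nvmf/target/aio_bdev"   # assumed relative path
nvmfpid=2200358                        # pid killed earlier in this run

run() { printf '%s\n' "$*"; }          # print instead of execute

run kill -9 "$nvmfpid"                           # leave the lvstore dirty
run build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1     # start a fresh target
run "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096   # triggers recovery
run "$rpc" bdev_get_bdevs -b 04168fb2-1afc-4992-89b3-85b859507f93 -t 2000
```

After recovery the test deliberately deletes `aio_bdev` out from under the lvstore, so the subsequent `NOT ... bdev_lvol_get_lvstores` call is expected to fail with "No such device".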
00:13:20.034 [2024-05-15 11:05:17.051010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.601 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.860 [2024-05-15 11:05:17.908657] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:20.860 [2024-05-15 11:05:17.908751] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:20.860 [2024-05-15 11:05:17.908779] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 04168fb2-1afc-4992-89b3-85b859507f93 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=04168fb2-1afc-4992-89b3-85b859507f93 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:13:20.860 11:05:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:13:20.860 11:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:20.860 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04168fb2-1afc-4992-89b3-85b859507f93 -t 2000 00:13:21.118 [ 00:13:21.118 { 00:13:21.118 "name": "04168fb2-1afc-4992-89b3-85b859507f93", 00:13:21.118 "aliases": [ 00:13:21.118 "lvs/lvol" 00:13:21.118 ], 00:13:21.118 "product_name": "Logical Volume", 00:13:21.118 "block_size": 4096, 00:13:21.118 "num_blocks": 38912, 00:13:21.118 "uuid": "04168fb2-1afc-4992-89b3-85b859507f93", 00:13:21.118 "assigned_rate_limits": { 00:13:21.118 "rw_ios_per_sec": 0, 00:13:21.118 "rw_mbytes_per_sec": 0, 00:13:21.118 "r_mbytes_per_sec": 0, 00:13:21.118 "w_mbytes_per_sec": 0 00:13:21.118 }, 00:13:21.118 "claimed": false, 00:13:21.118 "zoned": false, 00:13:21.118 "supported_io_types": { 00:13:21.118 "read": true, 00:13:21.118 "write": true, 00:13:21.119 "unmap": true, 00:13:21.119 "write_zeroes": true, 00:13:21.119 "flush": false, 00:13:21.119 "reset": true, 00:13:21.119 "compare": false, 00:13:21.119 "compare_and_write": false, 00:13:21.119 "abort": false, 00:13:21.119 "nvme_admin": false, 00:13:21.119 "nvme_io": false 00:13:21.119 }, 00:13:21.119 "driver_specific": { 00:13:21.119 "lvol": { 00:13:21.119 "lvol_store_uuid": "da0afd9a-08bf-4b43-b20c-47f4caa1e4f7", 00:13:21.119 "base_bdev": "aio_bdev", 00:13:21.119 "thin_provision": false, 00:13:21.119 "num_allocated_clusters": 38, 00:13:21.119 "snapshot": false, 00:13:21.119 
"clone": false, 00:13:21.119 "esnap_clone": false 00:13:21.119 } 00:13:21.119 } 00:13:21.119 } 00:13:21.119 ] 00:13:21.119 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:13:21.119 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:21.119 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:21.377 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:21.377 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:21.377 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:21.377 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:21.377 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:21.636 [2024-05-15 11:05:18.769121] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:21.636 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:21.894 request: 00:13:21.895 { 00:13:21.895 "uuid": "da0afd9a-08bf-4b43-b20c-47f4caa1e4f7", 00:13:21.895 "method": "bdev_lvol_get_lvstores", 00:13:21.895 "req_id": 1 00:13:21.895 } 00:13:21.895 Got JSON-RPC error response 00:13:21.895 response: 00:13:21.895 { 00:13:21.895 "code": -19, 00:13:21.895 "message": "No such device" 00:13:21.895 } 00:13:21.895 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:13:21.895 11:05:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:21.895 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:21.895 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:21.895 11:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:21.895 aio_bdev 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 04168fb2-1afc-4992-89b3-85b859507f93 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=04168fb2-1afc-4992-89b3-85b859507f93 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:22.152 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04168fb2-1afc-4992-89b3-85b859507f93 -t 2000 00:13:22.409 [ 00:13:22.409 { 00:13:22.409 "name": "04168fb2-1afc-4992-89b3-85b859507f93", 00:13:22.409 "aliases": [ 00:13:22.409 "lvs/lvol" 00:13:22.409 ], 00:13:22.409 "product_name": "Logical Volume", 00:13:22.409 "block_size": 4096, 
00:13:22.409 "num_blocks": 38912, 00:13:22.409 "uuid": "04168fb2-1afc-4992-89b3-85b859507f93", 00:13:22.409 "assigned_rate_limits": { 00:13:22.409 "rw_ios_per_sec": 0, 00:13:22.409 "rw_mbytes_per_sec": 0, 00:13:22.409 "r_mbytes_per_sec": 0, 00:13:22.409 "w_mbytes_per_sec": 0 00:13:22.409 }, 00:13:22.409 "claimed": false, 00:13:22.409 "zoned": false, 00:13:22.409 "supported_io_types": { 00:13:22.409 "read": true, 00:13:22.409 "write": true, 00:13:22.409 "unmap": true, 00:13:22.409 "write_zeroes": true, 00:13:22.409 "flush": false, 00:13:22.409 "reset": true, 00:13:22.409 "compare": false, 00:13:22.409 "compare_and_write": false, 00:13:22.409 "abort": false, 00:13:22.409 "nvme_admin": false, 00:13:22.409 "nvme_io": false 00:13:22.409 }, 00:13:22.409 "driver_specific": { 00:13:22.409 "lvol": { 00:13:22.409 "lvol_store_uuid": "da0afd9a-08bf-4b43-b20c-47f4caa1e4f7", 00:13:22.409 "base_bdev": "aio_bdev", 00:13:22.409 "thin_provision": false, 00:13:22.409 "num_allocated_clusters": 38, 00:13:22.409 "snapshot": false, 00:13:22.409 "clone": false, 00:13:22.409 "esnap_clone": false 00:13:22.409 } 00:13:22.409 } 00:13:22.409 } 00:13:22.409 ] 00:13:22.409 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:13:22.409 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:22.409 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:22.667 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:22.667 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:22.667 11:05:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:22.667 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:22.667 11:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04168fb2-1afc-4992-89b3-85b859507f93 00:13:22.925 11:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da0afd9a-08bf-4b43-b20c-47f4caa1e4f7 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:23.184 00:13:23.184 real 0m17.673s 00:13:23.184 user 0m45.391s 00:13:23.184 sys 0m3.560s 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:23.184 ************************************ 00:13:23.184 END TEST lvs_grow_dirty 00:13:23.184 ************************************ 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:23.184 
11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:13:23.184 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:23.184 nvmf_trace.0 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.442 rmmod nvme_tcp 00:13:23.442 rmmod nvme_fabrics 00:13:23.442 rmmod nvme_keyring 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2205552 ']' 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2205552 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 2205552 ']' 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 2205552 00:13:23.442 11:05:20 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2205552 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2205552' 00:13:23.442 killing process with pid 2205552 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 2205552 00:13:23.442 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 2205552 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.700 11:05:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.231 11:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.231 00:13:26.231 real 0m42.431s 00:13:26.231 user 1m6.571s 00:13:26.231 sys 0m9.326s 00:13:26.231 11:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:26.231 11:05:22 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:26.231 ************************************ 00:13:26.231 END TEST nvmf_lvs_grow 00:13:26.231 ************************************ 00:13:26.231 11:05:22 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:26.231 11:05:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:26.231 11:05:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:26.231 11:05:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.231 ************************************ 00:13:26.231 START TEST nvmf_bdev_io_wait 00:13:26.231 ************************************ 00:13:26.231 11:05:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:26.231 * Looking for test storage... 
00:13:26.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.231 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.232 11:05:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.506 11:05:27 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:31.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:31.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:31.506 11:05:27 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.506 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:31.507 Found net devices under 0000:86:00.0: cvl_0_0 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:31.507 Found net devices under 0000:86:00.1: cvl_0_1 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.507 11:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:31.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:13:31.507 00:13:31.507 --- 10.0.0.2 ping statistics --- 00:13:31.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.507 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:31.507 00:13:31.507 --- 10.0.0.1 ping statistics --- 00:13:31.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.507 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2209662 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2209662 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 2209662 ']' 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:31.507 11:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:31.507 [2024-05-15 11:05:28.273618] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:31.507 [2024-05-15 11:05:28.273666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.507 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.507 [2024-05-15 11:05:28.332083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.507 [2024-05-15 11:05:28.406215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:31.507 [2024-05-15 11:05:28.406259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.507 [2024-05-15 11:05:28.406266] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.507 [2024-05-15 11:05:28.406272] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.507 [2024-05-15 11:05:28.406276] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.507 [2024-05-15 11:05:28.406349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.507 [2024-05-15 11:05:28.406372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.507 [2024-05-15 11:05:28.406458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.507 [2024-05-15 11:05:28.406459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 [2024-05-15 11:05:29.189092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 Malloc0 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 [2024-05-15 11:05:29.245705] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:32.077 [2024-05-15 11:05:29.245952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:32.077 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2209842 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2209844 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.078 { 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme$subsystem", 00:13:32.078 "trtype": "$TEST_TRANSPORT", 00:13:32.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "$NVMF_PORT", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.078 "hdgst": ${hdgst:-false}, 00:13:32.078 "ddgst": ${ddgst:-false} 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 } 00:13:32.078 EOF 00:13:32.078 )") 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2209846 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.078 { 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme$subsystem", 00:13:32.078 "trtype": "$TEST_TRANSPORT", 00:13:32.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "$NVMF_PORT", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.078 "hdgst": ${hdgst:-false}, 00:13:32.078 "ddgst": ${ddgst:-false} 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 } 
00:13:32.078 EOF 00:13:32.078 )") 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2209849 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.078 { 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme$subsystem", 00:13:32.078 "trtype": "$TEST_TRANSPORT", 00:13:32.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "$NVMF_PORT", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.078 "hdgst": ${hdgst:-false}, 00:13:32.078 "ddgst": ${ddgst:-false} 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 } 00:13:32.078 EOF 00:13:32.078 )") 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 
00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.078 { 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme$subsystem", 00:13:32.078 "trtype": "$TEST_TRANSPORT", 00:13:32.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "$NVMF_PORT", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.078 "hdgst": ${hdgst:-false}, 00:13:32.078 "ddgst": ${ddgst:-false} 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 } 00:13:32.078 EOF 00:13:32.078 )") 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2209842 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme1", 00:13:32.078 "trtype": "tcp", 00:13:32.078 "traddr": "10.0.0.2", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "4420", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.078 "hdgst": false, 00:13:32.078 "ddgst": false 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 }' 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme1", 00:13:32.078 "trtype": "tcp", 00:13:32.078 "traddr": "10.0.0.2", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "4420", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.078 "hdgst": false, 00:13:32.078 "ddgst": false 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 }' 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme1", 00:13:32.078 "trtype": "tcp", 00:13:32.078 "traddr": "10.0.0.2", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "4420", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.078 "hdgst": false, 00:13:32.078 "ddgst": false 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 }' 00:13:32.078 11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.078 
11:05:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.078 "params": { 00:13:32.078 "name": "Nvme1", 00:13:32.078 "trtype": "tcp", 00:13:32.078 "traddr": "10.0.0.2", 00:13:32.078 "adrfam": "ipv4", 00:13:32.078 "trsvcid": "4420", 00:13:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.078 "hdgst": false, 00:13:32.078 "ddgst": false 00:13:32.078 }, 00:13:32.078 "method": "bdev_nvme_attach_controller" 00:13:32.078 }' 00:13:32.079 [2024-05-15 11:05:29.295536] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:32.079 [2024-05-15 11:05:29.295538] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:32.079 [2024-05-15 11:05:29.295587] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:32.079 [2024-05-15 11:05:29.295588] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:32.079 [2024-05-15 11:05:29.295899] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:32.079 [2024-05-15 11:05:29.295934] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:32.079 [2024-05-15 11:05:29.296894] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:13:32.079 [2024-05-15 11:05:29.296940] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:32.338 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.338 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.339 [2024-05-15 11:05:29.475103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.339 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.339 [2024-05-15 11:05:29.552795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:32.339 [2024-05-15 11:05:29.576696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.598 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.598 [2024-05-15 11:05:29.651682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:32.598 [2024-05-15 11:05:29.674893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.598 [2024-05-15 11:05:29.732782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.598 [2024-05-15 11:05:29.756194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:32.598 [2024-05-15 11:05:29.810284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:32.857 Running I/O for 1 seconds... 00:13:32.857 Running I/O for 1 seconds... 00:13:32.857 Running I/O for 1 seconds... 00:13:32.857 Running I/O for 1 seconds... 
00:13:33.794 00:13:33.794 Latency(us) 00:13:33.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.794 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:33.794 Nvme1n1 : 1.01 13740.15 53.67 0.00 0.00 9288.23 5071.92 18122.13 00:13:33.794 =================================================================================================================== 00:13:33.794 Total : 13740.15 53.67 0.00 0.00 9288.23 5071.92 18122.13 00:13:33.794 00:13:33.794 Latency(us) 00:13:33.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.794 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:33.794 Nvme1n1 : 1.01 6126.04 23.93 0.00 0.00 20762.32 4245.59 34420.65 00:13:33.794 =================================================================================================================== 00:13:33.794 Total : 6126.04 23.93 0.00 0.00 20762.32 4245.59 34420.65 00:13:33.794 00:13:33.794 Latency(us) 00:13:33.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.794 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:33.794 Nvme1n1 : 1.00 244440.20 954.84 0.00 0.00 522.07 214.59 1175.37 00:13:33.794 =================================================================================================================== 00:13:33.794 Total : 244440.20 954.84 0.00 0.00 522.07 214.59 1175.37 00:13:33.794 00:13:33.794 Latency(us) 00:13:33.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.794 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:33.794 Nvme1n1 : 1.00 6417.72 25.07 0.00 0.00 19895.95 4673.00 46730.02 00:13:33.794 =================================================================================================================== 00:13:33.795 Total : 6417.72 25.07 0.00 0.00 19895.95 4673.00 46730.02 00:13:34.054 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2209844 00:13:34.054 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2209846 00:13:34.054 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2209849 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.313 rmmod nvme_tcp 00:13:34.313 rmmod nvme_fabrics 00:13:34.313 rmmod nvme_keyring 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2209662 ']' 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2209662 00:13:34.313 11:05:31 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 2209662 ']' 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 2209662 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2209662 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:34.313 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2209662' 00:13:34.314 killing process with pid 2209662 00:13:34.314 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 2209662 00:13:34.314 [2024-05-15 11:05:31.458659] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:34.314 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 2209662 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.573 11:05:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.478 11:05:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.478 00:13:36.478 real 0m10.759s 00:13:36.478 user 0m20.014s 00:13:36.478 sys 0m5.481s 00:13:36.478 11:05:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:36.478 11:05:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:36.478 ************************************ 00:13:36.478 END TEST nvmf_bdev_io_wait 00:13:36.478 ************************************ 00:13:36.737 11:05:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:36.737 11:05:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:36.737 11:05:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:36.737 11:05:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.737 ************************************ 00:13:36.737 START TEST nvmf_queue_depth 00:13:36.737 ************************************ 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:36.737 * Looking for test storage... 
00:13:36.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.737 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.738 11:05:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.011 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.011 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.012 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.012 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.012 11:05:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:42.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:13:42.012 00:13:42.012 --- 10.0.0.2 ping statistics --- 00:13:42.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.012 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:13:42.012 00:13:42.012 --- 10.0.0.1 ping statistics --- 00:13:42.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.012 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2213638 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2213638 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2213638 ']' 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:42.012 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.012 [2024-05-15 11:05:39.170007] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:13:42.012 [2024-05-15 11:05:39.170049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.012 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.012 [2024-05-15 11:05:39.224479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.271 [2024-05-15 11:05:39.298341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:42.271 [2024-05-15 11:05:39.298378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.271 [2024-05-15 11:05:39.298385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.271 [2024-05-15 11:05:39.298391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.271 [2024-05-15 11:05:39.298396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.271 [2024-05-15 11:05:39.298445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.838 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:42.838 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:13:42.838 11:05:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.838 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:42.838 11:05:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 [2024-05-15 11:05:40.025605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.838 11:05:40 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 Malloc0 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 [2024-05-15 11:05:40.077563] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:42.838 [2024-05-15 11:05:40.077782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2213869 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2213869 /var/tmp/bdevperf.sock 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2213869 ']' 00:13:42.838 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.839 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:42.839 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.839 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:42.839 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.097 [2024-05-15 11:05:40.113451] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:13:43.097 [2024-05-15 11:05:40.113493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213869 ] 00:13:43.097 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.097 [2024-05-15 11:05:40.163207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.097 [2024-05-15 11:05:40.242295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.693 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:43.693 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:13:43.693 11:05:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:43.693 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:43.693 11:05:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.951 NVMe0n1 00:13:43.951 11:05:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:43.951 11:05:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:43.951 Running I/O for 10 seconds... 
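The bdevperf run above was launched with `-q 1024` (queue depth 1024) and `-o 4096` (4 KiB I/Os), so the MiB/s column in its result table is just IOPS scaled by the I/O size. A minimal sketch of that conversion, for sanity-checking the reported numbers (the `iops_to_mibs` helper name is illustrative, not part of the SPDK test scripts):

```shell
#!/bin/sh
# Convert an IOPS figure at a given I/O size (bytes) into MiB/s,
# matching the units bdevperf prints in its result table.
iops_to_mibs() {
  awk -v iops="$1" -v iosz="$2" 'BEGIN { printf "%.2f\n", iops * iosz / (1024 * 1024) }'
}

# 12335.71 IOPS at 4096-byte I/Os (the -o 4096 setting used above)
iops_to_mibs 12335.71 4096   # prints 48.19
```

With the figures from this run, 12335.71 IOPS at 4 KiB works out to 48.19 MiB/s, consistent with the throughput column bdevperf reports.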
00:13:53.928 00:13:53.928 Latency(us) 00:13:53.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.928 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:53.928 Verification LBA range: start 0x0 length 0x4000 00:13:53.928 NVMe0n1 : 10.05 12335.71 48.19 0.00 0.00 82719.24 9118.05 54480.36 00:13:53.928 =================================================================================================================== 00:13:53.928 Total : 12335.71 48.19 0.00 0.00 82719.24 9118.05 54480.36 00:13:53.928 0 00:13:53.928 11:05:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2213869 00:13:53.928 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2213869 ']' 00:13:53.928 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2213869 00:13:53.928 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:13:53.928 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:53.928 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2213869 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2213869' 00:13:54.186 killing process with pid 2213869 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2213869 00:13:54.186 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.186 00:13:54.186 Latency(us) 00:13:54.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.186 
=================================================================================================================== 00:13:54.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2213869 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.186 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.186 rmmod nvme_tcp 00:13:54.445 rmmod nvme_fabrics 00:13:54.445 rmmod nvme_keyring 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2213638 ']' 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2213638 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2213638 ']' 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2213638 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:54.445 11:05:51 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2213638 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2213638' 00:13:54.445 killing process with pid 2213638 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2213638 00:13:54.445 [2024-05-15 11:05:51.537677] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:54.445 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2213638 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.704 11:05:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.608 11:05:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.608 00:13:56.608 real 0m20.041s 00:13:56.608 user 0m24.780s 00:13:56.608 sys 0m5.461s 00:13:56.608 11:05:53 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:13:56.608 11:05:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:56.608 ************************************ 00:13:56.608 END TEST nvmf_queue_depth 00:13:56.608 ************************************ 00:13:56.608 11:05:53 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:56.608 11:05:53 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:56.608 11:05:53 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:56.608 11:05:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:56.867 ************************************ 00:13:56.867 START TEST nvmf_target_multipath 00:13:56.867 ************************************ 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:56.867 * Looking for test storage... 
00:13:56.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.867 11:05:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.867 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.867 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.867 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.867 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.867 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.868 11:05:54 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.868 11:05:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set 
+x 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:02.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:02.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:02.142 Found net devices under 0000:86:00.0: cvl_0_0 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.142 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:02.143 Found net devices under 0000:86:00.1: cvl_0_1 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.143 11:05:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:14:02.143 00:14:02.143 --- 10.0.0.2 ping statistics --- 00:14:02.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.143 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:02.143 00:14:02.143 --- 10.0.0.1 ping statistics --- 00:14:02.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.143 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.143 11:05:59 
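The `gather_supported_nvmf_pci_devs` trace above buckets PCI "vendor:device" IDs into per-driver-family arrays (`e810`, `x722`, `mlx`) before picking test NICs. A hedged, self-contained sketch of that classification logic; the `pci_bus_cache` entries below are made-up sample data standing in for the real sysfs scan:

```shell
#!/usr/bin/env bash
# Sketch of the bucketing done by gather_supported_nvmf_pci_devs in
# nvmf/common.sh: map "vendor:device" IDs to PCI addresses, then build
# driver-family arrays. The cache contents here are illustrative only.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"  # Intel E810 (ice), as in the log
  ["0x8086:0x37d2"]=""                           # Intel X722, none present
)
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
# Unquoted expansion word-splits "addr1 addr2" into array elements,
# and a missing/empty cache entry contributes nothing.
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done
```

This mirrors the "Found 0000:86:00.0 (0x8086 - 0x159b)" lines in the trace: two E810 ports found, nothing in the x722 or mlx buckets.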
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:02.143 only one NIC for nvmf test 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.143 rmmod nvme_tcp 00:14:02.143 rmmod nvme_fabrics 00:14:02.143 rmmod nvme_keyring 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.143 11:05:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.681 
11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.681 00:14:04.681 real 0m7.469s 00:14:04.681 user 0m1.489s 00:14:04.681 sys 0m3.980s 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:04.681 11:06:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 ************************************ 00:14:04.681 END TEST nvmf_target_multipath 00:14:04.681 ************************************ 00:14:04.681 11:06:01 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:04.681 11:06:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:04.681 11:06:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:04.681 11:06:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 ************************************ 00:14:04.681 START TEST nvmf_zcopy 00:14:04.681 ************************************ 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:04.681 * Looking for test storage... 
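The `nvmfcleanup` trace above (nvmf/common.sh@120-125) shows a retry pattern: disable errexit with `set +e`, attempt `modprobe -v -r nvme-tcp` in a `{1..20}` loop, then restore `set -e`. A minimal runnable sketch of that pattern, with a hypothetical `try_unload` function standing in for the real modprobe call so it runs without root or the nvme-tcp module loaded:

```shell
#!/usr/bin/env bash
# Sketch of the "tolerate transient failures, retry up to 20 times"
# cleanup pattern from nvmf/common.sh. try_unload is a stand-in for
# "modprobe -v -r nvme-tcp" and is rigged to succeed on the 3rd call.
set -e
attempts=0
try_unload() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]     # pretend the module unloads on the third try
}
set +e                       # failures inside the loop must not abort the script
for i in {1..20}; do
  try_unload && break
done
set -e                       # restore strict error handling
echo "unloaded after $attempts attempts"
```

The `set +e`/`set -e` bracketing is the key detail: under errexit, a single failed `modprobe -r` (e.g. module still in use) would kill the whole cleanup path instead of letting it retry.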
00:14:04.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.681 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.682 11:06:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.682 11:06:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.953 11:06:06 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.953 11:06:06 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:09.953 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:09.953 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:09.953 Found net devices under 0000:86:00.0: cvl_0_0 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:09.953 Found net devices under 0000:86:00.1: cvl_0_1 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.953 11:06:06 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:14:09.953 00:14:09.953 --- 10.0.0.2 ping statistics --- 00:14:09.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.953 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:14:09.953 00:14:09.953 --- 10.0.0.1 ping statistics --- 00:14:09.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.953 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.953 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2223001 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2223001 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 2223001 ']' 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:09.954 11:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.954 [2024-05-15 11:06:06.910300] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:14:09.954 [2024-05-15 11:06:06.910346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.954 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.954 [2024-05-15 11:06:06.966800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.954 [2024-05-15 11:06:07.045110] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.954 [2024-05-15 11:06:07.045145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.954 [2024-05-15 11:06:07.045152] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.954 [2024-05-15 11:06:07.045158] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.954 [2024-05-15 11:06:07.045170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
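The `nvmf/common.sh` records traced above perform the per-run network plumbing: one port is moved into a namespace so the target (10.0.0.2) and the initiator (10.0.0.1) get independent network stacks on a single machine. A minimal standalone sketch, assuming the same `cvl_0_0`/`cvl_0_1` device names from this run (the suite's version parameterizes IPs and the namespace name):

```shell
# Sketch of the nvmf_tcp_init plumbing from nvmf/common.sh: cvl_0_0 goes
# into a network namespace as the target side, cvl_0_1 stays in the root
# namespace as the initiator side.
NS=cvl_0_0_ns_spdk

nvmf_tcp_init_sketch() {
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port toward the namespaced target
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity sanity checks, as in the trace
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
}

# Only attempt this where it can actually work.
if [ "$(id -u)" -eq 0 ] && ip link show cvl_0_0 >/dev/null 2>&1; then
    nvmf_tcp_init_sketch
else
    echo "skipping: needs root and the cvl_0_0/cvl_0_1 interfaces"
fi
```

With this in place, `nvmf_tgt` is launched via `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.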
00:14:09.954 [2024-05-15 11:06:07.045187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.522 [2024-05-15 11:06:07.755235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 
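The `rpc_cmd` calls driven by `target/zcopy.sh` in this run configure the zero-copy target. A hedged sketch of the same sequence as direct `rpc.py` invocations (the `SPDK_RPC` path is an assumption for illustration; in the suite, `rpc_cmd` additionally runs inside the target's netns against the app's RPC socket):

```shell
# Target-side configuration from target/zcopy.sh, expressed as plain
# rpc.py calls; flags mirror the trace exactly.
SPDK_RPC=${SPDK_RPC:-./scripts/rpc.py}

configure_zcopy_target() {
    # TCP transport with zero-copy enabled and in-capsule data size 0 (-c 0)
    "$SPDK_RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem allowing any host (-a), serial number, up to 10 namespaces
    "$SPDK_RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    "$SPDK_RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1
    "$SPDK_RPC" bdev_malloc_create 32 4096 -b malloc0
    "$SPDK_RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
}

if [ -x "$SPDK_RPC" ]; then
    configure_zcopy_target
else
    echo "rpc.py not found at $SPDK_RPC; sequence shown for reference only"
fi
```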
00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.522 [2024-05-15 11:06:07.771221] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:10.522 [2024-05-15 11:06:07.771400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.522 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.781 malloc0 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:10.781 11:06:07 
nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:10.781 { 00:14:10.781 "params": { 00:14:10.781 "name": "Nvme$subsystem", 00:14:10.781 "trtype": "$TEST_TRANSPORT", 00:14:10.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.781 "adrfam": "ipv4", 00:14:10.781 "trsvcid": "$NVMF_PORT", 00:14:10.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.781 "hdgst": ${hdgst:-false}, 00:14:10.781 "ddgst": ${ddgst:-false} 00:14:10.781 }, 00:14:10.781 "method": "bdev_nvme_attach_controller" 00:14:10.781 } 00:14:10.781 EOF 00:14:10.781 )") 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:10.781 11:06:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:10.781 "params": { 00:14:10.781 "name": "Nvme1", 00:14:10.781 "trtype": "tcp", 00:14:10.781 "traddr": "10.0.0.2", 00:14:10.782 "adrfam": "ipv4", 00:14:10.782 "trsvcid": "4420", 00:14:10.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.782 "hdgst": false, 00:14:10.782 "ddgst": false 00:14:10.782 }, 00:14:10.782 "method": "bdev_nvme_attach_controller" 00:14:10.782 }' 00:14:10.782 [2024-05-15 11:06:07.848521] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
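The `gen_nvmf_target_json` helper traced above assembles the bdevperf attach parameters from a heredoc template and pipes them through `jq`; bdevperf then reads the result via `--json /dev/fd/62`. A standalone sketch of the resulting config — the values mirror the `printf` output in the trace, but the surrounding `"subsystems"` envelope is an assumption based on the usual SPDK JSON-config shape, since the trace shows only the method object:

```shell
# Sketch of the JSON config fed to bdevperf for the Nvme1 attach.
gen_nvmf_target_json_sketch() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# bdevperf would consume it roughly as:
#   bdevperf --json <(gen_nvmf_target_json_sketch) -t 10 -q 128 -w verify -o 8192
gen_nvmf_target_json_sketch > /tmp/zcopy_bdevperf.json
```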
00:14:10.782 [2024-05-15 11:06:07.848568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223270 ] 00:14:10.782 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.782 [2024-05-15 11:06:07.903032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.782 [2024-05-15 11:06:07.976169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.041 Running I/O for 10 seconds... 00:14:23.316 00:14:23.316 Latency(us) 00:14:23.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:23.316 Verification LBA range: start 0x0 length 0x1000 00:14:23.316 Nvme1n1 : 10.01 8659.61 67.65 0.00 0.00 14738.07 548.51 25872.47 00:14:23.316 =================================================================================================================== 00:14:23.316 Total : 8659.61 67.65 0.00 0.00 14738.07 548.51 25872.47 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2225098 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:23.316 { 00:14:23.316 "params": { 00:14:23.316 "name": "Nvme$subsystem", 00:14:23.316 "trtype": "$TEST_TRANSPORT", 00:14:23.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:23.316 "adrfam": "ipv4", 00:14:23.316 "trsvcid": "$NVMF_PORT", 00:14:23.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:23.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:23.316 "hdgst": ${hdgst:-false}, 00:14:23.316 "ddgst": ${ddgst:-false} 00:14:23.316 }, 00:14:23.316 "method": "bdev_nvme_attach_controller" 00:14:23.316 } 00:14:23.316 EOF 00:14:23.316 )") 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:23.316 [2024-05-15 11:06:18.538613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.316 [2024-05-15 11:06:18.538646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:23.316 11:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:23.316 "params": { 00:14:23.316 "name": "Nvme1", 00:14:23.316 "trtype": "tcp", 00:14:23.316 "traddr": "10.0.0.2", 00:14:23.316 "adrfam": "ipv4", 00:14:23.316 "trsvcid": "4420", 00:14:23.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.316 "hdgst": false, 00:14:23.316 "ddgst": false 00:14:23.316 }, 00:14:23.316 "method": "bdev_nvme_attach_controller" 00:14:23.316 }' 00:14:23.316 [2024-05-15 11:06:18.546600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.316 [2024-05-15 11:06:18.546618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.316 [2024-05-15 11:06:18.554618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.316 [2024-05-15 
11:06:18.554631] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.316 [2024-05-15 11:06:18.562639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.562650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.570661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.570671] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.574019] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:14:23.317 [2024-05-15 11:06:18.574057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225098 ] 00:14:23.317 [2024-05-15 11:06:18.578683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.578693] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.586705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.586715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.594724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.594734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.317 [2024-05-15 11:06:18.602745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.602754] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:23.317 [2024-05-15 11:06:18.610765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.610774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.618787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.618797] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.626135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.317 [2024-05-15 11:06:18.626807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.626817] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.638840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.638852] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.650871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.650881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.662907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.662923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.674935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.674952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.686969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.686979] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.699000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.699011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.700711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.317 [2024-05-15 11:06:18.711039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.711055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.723073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.723090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.735102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.735114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.747132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.747143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.759161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.759176] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.771194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.771205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.783245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:14:23.317 [2024-05-15 11:06:18.783261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.795272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.795288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.807305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.807320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.819325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.819340] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.831356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.831368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.843387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.843397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.855418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.855427] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.867454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.867467] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.879481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 
11:06:18.879490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.891516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.891526] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.903548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.903558] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.915587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.915601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.927616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.927625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.939646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.939655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:18.951690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:18.951704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.004507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.004524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.015857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.015869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:14:23.317 Running I/O for 5 seconds... 00:14:23.317 [2024-05-15 11:06:19.032147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.032171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.042851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.042870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.051549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.051567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.061046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.061064] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.070979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.070998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.079905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.079929] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.094406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.094424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.103191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.103209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.117766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.117785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.128468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.128485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.143169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.143189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.157410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.157432] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.168355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.168372] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.317 [2024-05-15 11:06:19.177018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.317 [2024-05-15 11:06:19.177036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.186903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.186922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.201226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.201245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 
11:06:19.214764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.214782] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.223761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.223779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.233203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.233221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.241917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.241935] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.250503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.250521] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.264859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.264878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.278440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.278458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.287471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.287489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.318 [2024-05-15 11:06:19.296124] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.318 [2024-05-15 11:06:19.296146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.099 [last two messages repeated through 2024-05-15 11:06:21.273688] 00:14:24.099 [2024-05-15 11:06:21.289180] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.099 [2024-05-15 11:06:21.289203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.099 [2024-05-15 11:06:21.298306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.099 [2024-05-15 11:06:21.298325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.099 [2024-05-15 11:06:21.312558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.099 [2024-05-15 11:06:21.312577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.099 [2024-05-15 11:06:21.326239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.099 [2024-05-15 11:06:21.326259] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.099 [2024-05-15 11:06:21.340437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.099 [2024-05-15 11:06:21.340456] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.099 [2024-05-15 11:06:21.354401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.099 [2024-05-15 11:06:21.354419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.368414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.368436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.377305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.377323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.392020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.392039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.399607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.399626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.408689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.408708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.422549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.422567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.431528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.431546] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.445807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.445826] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.459300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.459318] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.468254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 [2024-05-15 11:06:21.468272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.358 [2024-05-15 11:06:21.482661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.358 
[2024-05-15 11:06:21.482679] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.491359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.491377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.505849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.505868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.514918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.514936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.528929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.528947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.537771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.537788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.552135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.552153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.565759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.565778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.574667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.574685] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.588870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.588893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.597704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.597723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.359 [2024-05-15 11:06:21.611913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.359 [2024-05-15 11:06:21.611933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.625905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.625925] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.640090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.640109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.647647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.647665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.656597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.656615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.671592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.671610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:24.618 [2024-05-15 11:06:21.681904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.681922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.691614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.691631] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.618 [2024-05-15 11:06:21.705672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.618 [2024-05-15 11:06:21.705691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.719593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.719611] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.730289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.730307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.744773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.744792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.753733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.753751] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.767986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.768004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.782036] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.782055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.796224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.796243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.809903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.809921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.818760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.818778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.832814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.832832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.841772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.841790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.851140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.851158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.865574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.865592] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.619 [2024-05-15 11:06:21.878979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:24.619 [2024-05-15 11:06:21.878997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.878 [2024-05-15 11:06:21.892653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.878 [2024-05-15 11:06:21.892671] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.878 [2024-05-15 11:06:21.901546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.878 [2024-05-15 11:06:21.901564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.878 [2024-05-15 11:06:21.910361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.878 [2024-05-15 11:06:21.910379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.878 [2024-05-15 11:06:21.924740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.878 [2024-05-15 11:06:21.924757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.878 [2024-05-15 11:06:21.938685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.878 [2024-05-15 11:06:21.938703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.878 [2024-05-15 11:06:21.952664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:21.952683] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:21.966677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:21.966695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:21.975807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 
[2024-05-15 11:06:21.975824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:21.990048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:21.990066] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.003284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.003302] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.011982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.012001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.021068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.021086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.030425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.030444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.045094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.045112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.058854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.058872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.072616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.072634] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.086455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.086481] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.095589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.095608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.109868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.109886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.118708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.118726] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.127928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.127946] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.879 [2024-05-15 11:06:22.142108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.879 [2024-05-15 11:06:22.142126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.155987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.156006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.170411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.170430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:25.138 [2024-05-15 11:06:22.183812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.183831] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.198248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.198267] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.212296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.212315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.222884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.222902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.237626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.237645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.246524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.246541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.260820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.260838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.274102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.274121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.288410] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.288428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.299226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.299244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.308341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.308358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.317002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.317020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.331680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.331699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.342310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.342329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.138 [2024-05-15 11:06:22.356565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.138 [2024-05-15 11:06:22.356584] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.139 [2024-05-15 11:06:22.370272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.139 [2024-05-15 11:06:22.370291] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.139 [2024-05-15 11:06:22.379328] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:25.139 [2024-05-15 11:06:22.379347] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.139 [2024-05-15 11:06:22.393720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.139 [2024-05-15 11:06:22.393740] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.407819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.407838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.421863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.421881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.430838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.430856] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.439533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.439568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.448878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.448896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.463563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.463582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.474344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 
[2024-05-15 11:06:22.474363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.488665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.488684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.502368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.502387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.516315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.516334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.530606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.530625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.541890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.541909] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.556170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.556189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.565045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.565063] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.574454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.574473] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.583744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.583763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.598309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.598327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.612111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.612129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.621176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.621195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.635477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.635496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.644634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.644653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.398 [2024-05-15 11:06:22.658833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.398 [2024-05-15 11:06:22.658853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.658 [2024-05-15 11:06:22.666436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.658 [2024-05-15 11:06:22.666456] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:25.658 [2024-05-15 11:06:22.676713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.658 [2024-05-15 11:06:22.676732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 [2024-05-15 11:06:23.965153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952
[2024-05-15 11:06:23.965180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 [2024-05-15 11:06:23.976586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952 [2024-05-15 11:06:23.976606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 [2024-05-15 11:06:23.998802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952 [2024-05-15 11:06:23.998822] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 [2024-05-15 11:06:24.013107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952 [2024-05-15 11:06:24.013127] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 [2024-05-15 11:06:24.022077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952 [2024-05-15 11:06:24.022095] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 [2024-05-15 11:06:24.035522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952 [2024-05-15 11:06:24.035540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.952 00:14:26.952 Latency(us) 00:14:26.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.952 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:26.952 Nvme1n1 : 5.01 16742.40 130.80 0.00 0.00 7637.29 3447.76 16640.45 00:14:26.952 =================================================================================================================== 00:14:26.952 Total : 16742.40 130.80 0.00 0.00 7637.29 3447.76 16640.45 00:14:26.952 [2024-05-15 11:06:24.044828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.952 [2024-05-15 11:06:24.044845] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.211 [2024-05-15 11:06:24.245358] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.211 [2024-05-15 11:06:24.245368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2225098) - No such process 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2225098 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:27.211 delay0 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.211 11:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:27.211 EAL: No free 2048 kB hugepages reported on node 1 
00:14:27.211 [2024-05-15 11:06:24.375016] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:33.776 Initializing NVMe Controllers 00:14:33.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.776 Initialization complete. Launching workers. 00:14:33.776 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 207 00:14:33.776 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 494, failed to submit 33 00:14:33.776 success 300, unsuccess 194, failed 0 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.776 rmmod nvme_tcp 00:14:33.776 rmmod nvme_fabrics 00:14:33.776 rmmod nvme_keyring 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2223001 ']' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2223001 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@947 -- # '[' -z 2223001 ']' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 2223001 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2223001 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2223001' 00:14:33.776 killing process with pid 2223001 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 2223001 00:14:33.776 [2024-05-15 11:06:30.610064] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 2223001 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.776 11:06:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:14:35.681 11:06:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.681 00:14:35.681 real 0m31.459s 00:14:35.681 user 0m43.134s 00:14:35.681 sys 0m10.347s 00:14:35.681 11:06:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:35.681 11:06:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:35.681 ************************************ 00:14:35.681 END TEST nvmf_zcopy 00:14:35.681 ************************************ 00:14:35.681 11:06:32 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:35.681 11:06:32 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:35.681 11:06:32 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:35.681 11:06:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.940 ************************************ 00:14:35.940 START TEST nvmf_nmic 00:14:35.940 ************************************ 00:14:35.940 11:06:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:35.940 * Looking for test storage... 
00:14:35.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.940 11:06:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.212 11:06:37 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.212 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.212 Found net devices under 0000:86:00.1: cvl_0_1 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.212 11:06:37 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.212 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:14:41.213 00:14:41.213 --- 10.0.0.2 ping statistics --- 00:14:41.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.213 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:14:41.213 00:14:41.213 --- 10.0.0.1 ping statistics --- 00:14:41.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.213 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2230263 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2230263 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 2230263 ']' 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:41.213 11:06:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 [2024-05-15 11:06:37.744838] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:14:41.213 [2024-05-15 11:06:37.744881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.213 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.213 [2024-05-15 11:06:37.802670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.213 [2024-05-15 11:06:37.884323] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.213 [2024-05-15 11:06:37.884357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.213 [2024-05-15 11:06:37.884366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.213 [2024-05-15 11:06:37.884374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.213 [2024-05-15 11:06:37.884379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:41.213 [2024-05-15 11:06:37.884429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.213 [2024-05-15 11:06:37.884633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.213 [2024-05-15 11:06:37.884698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.213 [2024-05-15 11:06:37.884699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 [2024-05-15 11:06:38.602175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 Malloc0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 [2024-05-15 11:06:38.653608] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.473 [2024-05-15 11:06:38.653846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:41.473 test case1: single bdev can't be used in multiple subsystems 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 
11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 [2024-05-15 11:06:38.677740] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:41.473 [2024-05-15 11:06:38.677757] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:41.473 [2024-05-15 11:06:38.677765] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.473 request: 00:14:41.473 { 00:14:41.473 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:41.473 "namespace": { 00:14:41.473 "bdev_name": "Malloc0", 00:14:41.473 "no_auto_visible": false 00:14:41.473 }, 00:14:41.473 "method": "nvmf_subsystem_add_ns", 00:14:41.473 "req_id": 1 00:14:41.473 } 00:14:41.473 Got JSON-RPC error response 00:14:41.473 response: 00:14:41.473 { 00:14:41.473 "code": -32602, 00:14:41.473 "message": "Invalid parameters" 00:14:41.473 } 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:14:41.473 11:06:38 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:41.473 Adding namespace failed - expected result. 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:41.473 test case2: host connect to nvmf target in multiple paths 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.473 [2024-05-15 11:06:38.689857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.473 11:06:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.851 11:06:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:43.789 11:06:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.789 11:06:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:14:43.789 11:06:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.789 11:06:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- 
# [[ -n '' ]] 00:14:43.789 11:06:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:14:46.323 11:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:46.323 11:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:46.323 11:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.323 11:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:46.323 11:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.323 11:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:14:46.323 11:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:46.323 [global] 00:14:46.323 thread=1 00:14:46.323 invalidate=1 00:14:46.323 rw=write 00:14:46.323 time_based=1 00:14:46.323 runtime=1 00:14:46.323 ioengine=libaio 00:14:46.323 direct=1 00:14:46.323 bs=4096 00:14:46.323 iodepth=1 00:14:46.323 norandommap=0 00:14:46.323 numjobs=1 00:14:46.323 00:14:46.323 verify_dump=1 00:14:46.323 verify_backlog=512 00:14:46.323 verify_state_save=0 00:14:46.323 do_verify=1 00:14:46.323 verify=crc32c-intel 00:14:46.323 [job0] 00:14:46.323 filename=/dev/nvme0n1 00:14:46.323 Could not set queue depth (nvme0n1) 00:14:46.323 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:46.323 fio-3.35 00:14:46.323 Starting 1 thread 00:14:47.261 00:14:47.261 job0: (groupid=0, jobs=1): err= 0: pid=2231327: Wed May 15 11:06:44 2024 00:14:47.261 read: IOPS=922, BW=3688KiB/s (3777kB/s)(3740KiB/1014msec) 00:14:47.261 slat (nsec): min=7050, max=45622, avg=8108.79, stdev=2187.00 00:14:47.261 clat (usec): min=198, max=41020, avg=896.77, stdev=5119.31 00:14:47.261 lat (usec): min=205, max=41039, avg=904.88, 
stdev=5120.78 00:14:47.261 clat percentiles (usec): 00:14:47.261 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:14:47.261 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 247], 60.00th=[ 249], 00:14:47.261 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 281], 00:14:47.261 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:47.261 | 99.99th=[41157] 00:14:47.261 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:14:47.261 slat (nsec): min=9784, max=44203, avg=11431.87, stdev=2294.40 00:14:47.261 clat (usec): min=117, max=244, avg=146.07, stdev=12.35 00:14:47.261 lat (usec): min=133, max=280, avg=157.50, stdev=13.10 00:14:47.261 clat percentiles (usec): 00:14:47.261 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:14:47.261 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:14:47.261 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 174], 00:14:47.261 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 243], 99.95th=[ 245], 00:14:47.261 | 99.99th=[ 245] 00:14:47.261 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:14:47.261 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:47.261 lat (usec) : 250=81.47%, 500=17.76% 00:14:47.261 lat (msec) : 50=0.77% 00:14:47.261 cpu : usr=1.48%, sys=3.16%, ctx=1959, majf=0, minf=2 00:14:47.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.261 issued rwts: total=935,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.261 00:14:47.261 Run status group 0 (all jobs): 00:14:47.261 READ: bw=3688KiB/s (3777kB/s), 3688KiB/s-3688KiB/s (3777kB/s-3777kB/s), io=3740KiB (3830kB), run=1014-1014msec 
00:14:47.261 WRITE: bw=4039KiB/s (4136kB/s), 4039KiB/s-4039KiB/s (4136kB/s-4136kB/s), io=4096KiB (4194kB), run=1014-1014msec 00:14:47.261 00:14:47.261 Disk stats (read/write): 00:14:47.261 nvme0n1: ios=982/1024, merge=0/0, ticks=812/144, in_queue=956, util=95.39% 00:14:47.261 11:06:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.520 rmmod nvme_tcp 00:14:47.520 rmmod nvme_fabrics 00:14:47.520 rmmod nvme_keyring 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2230263 ']' 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2230263 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 2230263 ']' 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 2230263 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2230263 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2230263' 00:14:47.520 killing process with pid 2230263 00:14:47.520 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 2230263 00:14:47.520 [2024-05-15 11:06:44.716595] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:47.521 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 2230263 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.780 11:06:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.342 11:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.342 00:14:50.342 real 0m14.047s 00:14:50.342 user 0m34.549s 00:14:50.342 sys 0m4.287s 00:14:50.342 11:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:50.342 11:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.342 ************************************ 00:14:50.342 END TEST nvmf_nmic 00:14:50.342 ************************************ 00:14:50.342 11:06:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:50.342 11:06:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:50.342 11:06:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:50.342 11:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.342 ************************************ 00:14:50.342 START TEST nvmf_fio_target 00:14:50.342 ************************************ 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:50.342 * Looking for test storage... 
00:14:50.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.342 11:06:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.343 11:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:55.618 
11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.618 
11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:55.618 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:55.618 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.618 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:55.619 Found net devices under 0000:86:00.0: cvl_0_0 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:55.619 Found net devices under 0000:86:00.1: cvl_0_1 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.619 11:06:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:55.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:14:55.619 00:14:55.619 --- 10.0.0.2 ping statistics --- 00:14:55.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.619 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:14:55.619 00:14:55.619 --- 10.0.0.1 ping statistics --- 00:14:55.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.619 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2235062 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2235062 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 2235062 ']' 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 11:06:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:55.619 [2024-05-15 11:06:52.340553] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:14:55.619 [2024-05-15 11:06:52.340595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.619 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.619 [2024-05-15 11:06:52.396500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.619 [2024-05-15 11:06:52.476618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.619 [2024-05-15 11:06:52.476650] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.619 [2024-05-15 11:06:52.476660] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.619 [2024-05-15 11:06:52.476666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.619 [2024-05-15 11:06:52.476671] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:55.619 [2024-05-15 11:06:52.476708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.619 [2024-05-15 11:06:52.476725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.619 [2024-05-15 11:06:52.476831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.619 [2024-05-15 11:06:52.476832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.188 [2024-05-15 11:06:53.357692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.188 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.460 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:56.460 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.719 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:56.719 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:14:56.719 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:56.719 11:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.977 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:56.977 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:57.236 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.495 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:57.495 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.495 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:57.495 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.753 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:57.754 11:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:58.012 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:58.271 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:58.271 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.272 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:58.272 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.530 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.789 [2024-05-15 11:06:55.803717] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:58.789 [2024-05-15 11:06:55.804035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.789 11:06:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:58.789 11:06:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:59.048 11:06:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.429 11:06:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:00.429 11:06:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:15:00.429 11:06:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 
00:15:00.429 11:06:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:15:00.429 11:06:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:15:00.429 11:06:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:15:02.357 11:06:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:02.357 [global] 00:15:02.357 thread=1 00:15:02.357 invalidate=1 00:15:02.357 rw=write 00:15:02.357 time_based=1 00:15:02.357 runtime=1 00:15:02.357 ioengine=libaio 00:15:02.357 direct=1 00:15:02.357 bs=4096 00:15:02.357 iodepth=1 00:15:02.358 norandommap=0 00:15:02.358 numjobs=1 00:15:02.358 00:15:02.358 verify_dump=1 00:15:02.358 verify_backlog=512 00:15:02.358 verify_state_save=0 00:15:02.358 do_verify=1 00:15:02.358 verify=crc32c-intel 00:15:02.358 [job0] 00:15:02.358 filename=/dev/nvme0n1 00:15:02.358 [job1] 00:15:02.358 filename=/dev/nvme0n2 00:15:02.358 [job2] 00:15:02.358 filename=/dev/nvme0n3 00:15:02.358 [job3] 00:15:02.358 filename=/dev/nvme0n4 00:15:02.358 Could not set queue depth (nvme0n1) 00:15:02.358 Could not set queue depth (nvme0n2) 00:15:02.358 Could not set queue depth (nvme0n3) 00:15:02.358 Could not set queue depth (nvme0n4) 
00:15:02.617 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.617 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.617 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.617 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.617 fio-3.35 00:15:02.617 Starting 4 threads 00:15:03.984 00:15:03.984 job0: (groupid=0, jobs=1): err= 0: pid=2236409: Wed May 15 11:07:00 2024 00:15:03.984 read: IOPS=2103, BW=8416KiB/s (8618kB/s)(8424KiB/1001msec) 00:15:03.984 slat (nsec): min=6216, max=66111, avg=7319.59, stdev=2269.80 00:15:03.984 clat (usec): min=169, max=9312, avg=210.12, stdev=204.51 00:15:03.984 lat (usec): min=176, max=9321, avg=217.44, stdev=204.63 00:15:03.984 clat percentiles (usec): 00:15:03.984 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 190], 00:15:03.984 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:15:03.984 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 243], 95.00th=[ 253], 00:15:03.984 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 1418], 99.95th=[ 1795], 00:15:03.984 | 99.99th=[ 9372] 00:15:03.984 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:03.984 slat (usec): min=6, max=41155, avg=55.68, stdev=1324.43 00:15:03.984 clat (usec): min=118, max=280, avg=151.69, stdev=29.95 00:15:03.984 lat (usec): min=128, max=41382, avg=207.37, stdev=1327.95 00:15:03.984 clat percentiles (usec): 00:15:03.984 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:15:03.984 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:15:03.984 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 188], 95.00th=[ 239], 00:15:03.984 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 262], 99.95th=[ 269], 00:15:03.984 | 99.99th=[ 281] 00:15:03.984 bw ( KiB/s): min= 
8192, max= 8192, per=51.65%, avg=8192.00, stdev= 0.00, samples=1 00:15:03.984 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:03.984 lat (usec) : 250=96.64%, 500=3.28%, 750=0.02% 00:15:03.984 lat (msec) : 2=0.04%, 10=0.02% 00:15:03.984 cpu : usr=2.40%, sys=4.50%, ctx=4672, majf=0, minf=1 00:15:03.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.984 issued rwts: total=2106,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.984 job1: (groupid=0, jobs=1): err= 0: pid=2236410: Wed May 15 11:07:00 2024 00:15:03.984 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:15:03.984 slat (nsec): min=9606, max=25733, avg=20794.78, stdev=3567.45 00:15:03.984 clat (usec): min=370, max=42007, avg=39226.52, stdev=8473.73 00:15:03.984 lat (usec): min=396, max=42029, avg=39247.32, stdev=8472.68 00:15:03.984 clat percentiles (usec): 00:15:03.984 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:03.984 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:03.984 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:03.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:03.984 | 99.99th=[42206] 00:15:03.984 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:15:03.984 slat (nsec): min=11394, max=38659, avg=12737.17, stdev=1755.85 00:15:03.984 clat (usec): min=157, max=275, avg=178.46, stdev=12.49 00:15:03.984 lat (usec): min=169, max=313, avg=191.20, stdev=12.98 00:15:03.984 clat percentiles (usec): 00:15:03.984 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:15:03.984 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:15:03.984 
| 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 198], 00:15:03.984 | 99.00th=[ 212], 99.50th=[ 258], 99.90th=[ 277], 99.95th=[ 277], 00:15:03.984 | 99.99th=[ 277] 00:15:03.984 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.984 lat (usec) : 250=95.14%, 500=0.75% 00:15:03.984 lat (msec) : 50=4.11% 00:15:03.984 cpu : usr=0.60%, sys=0.80%, ctx=535, majf=0, minf=1 00:15:03.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.984 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.984 job2: (groupid=0, jobs=1): err= 0: pid=2236411: Wed May 15 11:07:00 2024 00:15:03.984 read: IOPS=368, BW=1476KiB/s (1511kB/s)(1480KiB/1003msec) 00:15:03.984 slat (nsec): min=7363, max=73046, avg=9803.47, stdev=5330.13 00:15:03.984 clat (usec): min=218, max=42028, avg=2428.14, stdev=9059.14 00:15:03.984 lat (usec): min=226, max=42050, avg=2437.94, stdev=9061.63 00:15:03.984 clat percentiles (usec): 00:15:03.984 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 251], 00:15:03.984 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:15:03.984 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[40633], 00:15:03.984 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:03.984 | 99.99th=[42206] 00:15:03.984 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:15:03.984 slat (nsec): min=10977, max=35273, avg=12753.48, stdev=2126.70 00:15:03.984 clat (usec): min=149, max=306, avg=176.53, stdev=15.36 00:15:03.984 lat (usec): min=164, max=322, avg=189.29, stdev=15.78 00:15:03.984 clat percentiles (usec): 
00:15:03.984 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:15:03.984 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:15:03.984 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:15:03.984 | 99.00th=[ 237], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 306], 00:15:03.984 | 99.99th=[ 306] 00:15:03.984 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.984 lat (usec) : 250=65.42%, 500=31.97%, 750=0.34% 00:15:03.984 lat (msec) : 50=2.27% 00:15:03.984 cpu : usr=0.80%, sys=1.40%, ctx=886, majf=0, minf=1 00:15:03.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.984 issued rwts: total=370,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.984 job3: (groupid=0, jobs=1): err= 0: pid=2236412: Wed May 15 11:07:00 2024 00:15:03.984 read: IOPS=387, BW=1549KiB/s (1586kB/s)(1600KiB/1033msec) 00:15:03.984 slat (nsec): min=7353, max=48456, avg=8957.34, stdev=3416.59 00:15:03.984 clat (usec): min=202, max=41129, avg=2321.56, stdev=8881.56 00:15:03.984 lat (usec): min=210, max=41141, avg=2330.52, stdev=8882.85 00:15:03.984 clat percentiles (usec): 00:15:03.984 | 1.00th=[ 219], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:15:03.984 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:15:03.984 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 3916], 00:15:03.985 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:03.985 | 99.99th=[41157] 00:15:03.985 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:15:03.985 slat (nsec): min=10815, max=43470, avg=12223.81, 
stdev=2178.94 00:15:03.985 clat (usec): min=146, max=339, avg=176.90, stdev=15.71 00:15:03.985 lat (usec): min=157, max=383, avg=189.12, stdev=16.42 00:15:03.985 clat percentiles (usec): 00:15:03.985 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:15:03.985 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:15:03.985 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:15:03.985 | 99.00th=[ 229], 99.50th=[ 247], 99.90th=[ 338], 99.95th=[ 338], 00:15:03.985 | 99.99th=[ 338] 00:15:03.985 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.985 lat (usec) : 250=65.02%, 500=32.13%, 750=0.55% 00:15:03.985 lat (msec) : 4=0.11%, 50=2.19% 00:15:03.985 cpu : usr=1.45%, sys=0.68%, ctx=913, majf=0, minf=2 00:15:03.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.985 issued rwts: total=400,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.985 00:15:03.985 Run status group 0 (all jobs): 00:15:03.985 READ: bw=11.0MiB/s (11.5MB/s), 91.7KiB/s-8416KiB/s (93.9kB/s-8618kB/s), io=11.3MiB (11.9MB), run=1001-1033msec 00:15:03.985 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:15:03.985 00:15:03.985 Disk stats (read/write): 00:15:03.985 nvme0n1: ios=1809/2048, merge=0/0, ticks=587/298, in_queue=885, util=86.67% 00:15:03.985 nvme0n2: ios=69/512, merge=0/0, ticks=794/90, in_queue=884, util=90.96% 00:15:03.985 nvme0n3: ios=390/512, merge=0/0, ticks=1633/83, in_queue=1716, util=93.65% 00:15:03.985 nvme0n4: ios=418/512, merge=0/0, ticks=1599/80, in_queue=1679, util=94.44% 00:15:03.985 
11:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:03.985 [global] 00:15:03.985 thread=1 00:15:03.985 invalidate=1 00:15:03.985 rw=randwrite 00:15:03.985 time_based=1 00:15:03.985 runtime=1 00:15:03.985 ioengine=libaio 00:15:03.985 direct=1 00:15:03.985 bs=4096 00:15:03.985 iodepth=1 00:15:03.985 norandommap=0 00:15:03.985 numjobs=1 00:15:03.985 00:15:03.985 verify_dump=1 00:15:03.985 verify_backlog=512 00:15:03.985 verify_state_save=0 00:15:03.985 do_verify=1 00:15:03.985 verify=crc32c-intel 00:15:03.985 [job0] 00:15:03.985 filename=/dev/nvme0n1 00:15:03.985 [job1] 00:15:03.985 filename=/dev/nvme0n2 00:15:03.985 [job2] 00:15:03.985 filename=/dev/nvme0n3 00:15:03.985 [job3] 00:15:03.985 filename=/dev/nvme0n4 00:15:03.985 Could not set queue depth (nvme0n1) 00:15:03.985 Could not set queue depth (nvme0n2) 00:15:03.985 Could not set queue depth (nvme0n3) 00:15:03.985 Could not set queue depth (nvme0n4) 00:15:03.985 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.985 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.985 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.985 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.985 fio-3.35 00:15:03.985 Starting 4 threads 00:15:05.352 00:15:05.352 job0: (groupid=0, jobs=1): err= 0: pid=2236780: Wed May 15 11:07:02 2024 00:15:05.352 read: IOPS=1873, BW=7494KiB/s (7674kB/s)(7696KiB/1027msec) 00:15:05.352 slat (nsec): min=7218, max=38143, avg=8392.45, stdev=1505.91 00:15:05.352 clat (usec): min=201, max=41377, avg=318.67, stdev=1612.48 00:15:05.352 lat (usec): min=209, max=41385, avg=327.06, stdev=1612.88 00:15:05.352 clat percentiles 
(usec): 00:15:05.352 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 233], 00:15:05.352 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:15:05.352 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 322], 00:15:05.352 | 99.00th=[ 453], 99.50th=[ 498], 99.90th=[41157], 99.95th=[41157], 00:15:05.352 | 99.99th=[41157] 00:15:05.352 write: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec); 0 zone resets 00:15:05.352 slat (nsec): min=9587, max=40514, avg=11984.89, stdev=1812.09 00:15:05.352 clat (usec): min=122, max=435, avg=175.22, stdev=33.78 00:15:05.352 lat (usec): min=133, max=449, avg=187.21, stdev=33.96 00:15:05.352 clat percentiles (usec): 00:15:05.352 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:15:05.352 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:15:05.352 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 227], 95.00th=[ 241], 00:15:05.352 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 416], 99.95th=[ 424], 00:15:05.352 | 99.99th=[ 437] 00:15:05.352 bw ( KiB/s): min= 8072, max= 8312, per=35.23%, avg=8192.00, stdev=169.71, samples=2 00:15:05.352 iops : min= 2018, max= 2078, avg=2048.00, stdev=42.43, samples=2 00:15:05.352 lat (usec) : 250=82.58%, 500=17.20%, 750=0.15% 00:15:05.352 lat (msec) : 50=0.08% 00:15:05.352 cpu : usr=3.22%, sys=6.34%, ctx=3975, majf=0, minf=1 00:15:05.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.352 issued rwts: total=1924,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.352 job1: (groupid=0, jobs=1): err= 0: pid=2236781: Wed May 15 11:07:02 2024 00:15:05.352 read: IOPS=69, BW=280KiB/s (286kB/s)(288KiB/1030msec) 00:15:05.352 slat (nsec): min=7812, max=27908, avg=13226.61, 
stdev=6852.18 00:15:05.352 clat (usec): min=263, max=41136, avg=12758.23, stdev=18836.86 00:15:05.352 lat (usec): min=271, max=41159, avg=12771.46, stdev=18843.28 00:15:05.352 clat percentiles (usec): 00:15:05.352 | 1.00th=[ 265], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 326], 00:15:05.352 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 396], 00:15:05.352 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:05.352 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:05.352 | 99.99th=[41157] 00:15:05.352 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:15:05.352 slat (usec): min=8, max=9959, avg=29.55, stdev=439.71 00:15:05.352 clat (usec): min=146, max=1504, avg=182.21, stdev=65.05 00:15:05.352 lat (usec): min=155, max=10240, avg=211.76, stdev=448.82 00:15:05.352 clat percentiles (usec): 00:15:05.352 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:15:05.352 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:15:05.352 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 235], 00:15:05.352 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 1500], 99.95th=[ 1500], 00:15:05.352 | 99.99th=[ 1500] 00:15:05.352 bw ( KiB/s): min= 4096, max= 4096, per=17.62%, avg=4096.00, stdev= 0.00, samples=1 00:15:05.352 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:05.352 lat (usec) : 250=84.42%, 500=11.64% 00:15:05.352 lat (msec) : 2=0.17%, 50=3.77% 00:15:05.352 cpu : usr=0.29%, sys=0.58%, ctx=586, majf=0, minf=2 00:15:05.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.353 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.353 job2: (groupid=0, 
jobs=1): err= 0: pid=2236784: Wed May 15 11:07:02 2024 00:15:05.353 read: IOPS=519, BW=2077KiB/s (2127kB/s)(2112KiB/1017msec) 00:15:05.353 slat (usec): min=7, max=165, avg= 9.74, stdev= 7.45 00:15:05.353 clat (usec): min=203, max=41053, avg=1498.43, stdev=6980.06 00:15:05.353 lat (usec): min=212, max=41065, avg=1508.18, stdev=6980.62 00:15:05.353 clat percentiles (usec): 00:15:05.353 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 245], 00:15:05.353 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:15:05.353 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:15:05.353 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:05.353 | 99.99th=[41157] 00:15:05.353 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:15:05.353 slat (nsec): min=5699, max=41830, avg=12565.73, stdev=4110.13 00:15:05.353 clat (usec): min=132, max=500, avg=197.51, stdev=32.78 00:15:05.353 lat (usec): min=143, max=507, avg=210.07, stdev=33.45 00:15:05.353 clat percentiles (usec): 00:15:05.353 | 1.00th=[ 141], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 174], 00:15:05.353 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:15:05.353 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 245], 00:15:05.353 | 99.00th=[ 277], 99.50th=[ 326], 99.90th=[ 453], 99.95th=[ 502], 00:15:05.353 | 99.99th=[ 502] 00:15:05.353 bw ( KiB/s): min= 8192, max= 8192, per=35.23%, avg=8192.00, stdev= 0.00, samples=1 00:15:05.353 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:05.353 lat (usec) : 250=71.91%, 500=27.00%, 750=0.06% 00:15:05.353 lat (msec) : 50=1.03% 00:15:05.353 cpu : usr=1.38%, sys=1.97%, ctx=1552, majf=0, minf=1 00:15:05.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:05.353 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.353 job3: (groupid=0, jobs=1): err= 0: pid=2236788: Wed May 15 11:07:02 2024 00:15:05.353 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:05.353 slat (nsec): min=7070, max=40209, avg=8040.66, stdev=1327.70 00:15:05.353 clat (usec): min=186, max=720, avg=258.13, stdev=51.35 00:15:05.353 lat (usec): min=194, max=728, avg=266.17, stdev=51.39 00:15:05.353 clat percentiles (usec): 00:15:05.353 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 231], 00:15:05.353 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:15:05.353 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 318], 95.00th=[ 367], 00:15:05.353 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 611], 99.95th=[ 635], 00:15:05.353 | 99.99th=[ 717] 00:15:05.353 write: IOPS=2400, BW=9602KiB/s (9833kB/s)(9612KiB/1001msec); 0 zone resets 00:15:05.353 slat (nsec): min=10096, max=41353, avg=11418.46, stdev=1746.74 00:15:05.353 clat (usec): min=128, max=454, avg=172.12, stdev=28.15 00:15:05.353 lat (usec): min=139, max=467, avg=183.53, stdev=28.52 00:15:05.353 clat percentiles (usec): 00:15:05.353 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:15:05.353 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:15:05.353 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 223], 00:15:05.353 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 404], 99.95th=[ 449], 00:15:05.353 | 99.99th=[ 453] 00:15:05.353 bw ( KiB/s): min= 8192, max= 8192, per=35.23%, avg=8192.00, stdev= 0.00, samples=1 00:15:05.353 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:05.353 lat (usec) : 250=82.07%, 500=17.73%, 750=0.20% 00:15:05.353 cpu : usr=4.60%, sys=6.10%, ctx=4451, majf=0, minf=1 00:15:05.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.353 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.353 issued rwts: total=2048,2403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.353 00:15:05.353 Run status group 0 (all jobs): 00:15:05.353 READ: bw=17.3MiB/s (18.2MB/s), 280KiB/s-8184KiB/s (286kB/s-8380kB/s), io=17.9MiB (18.7MB), run=1001-1030msec 00:15:05.353 WRITE: bw=22.7MiB/s (23.8MB/s), 1988KiB/s-9602KiB/s (2036kB/s-9833kB/s), io=23.4MiB (24.5MB), run=1001-1030msec 00:15:05.353 00:15:05.353 Disk stats (read/write): 00:15:05.353 nvme0n1: ios=1788/2048, merge=0/0, ticks=652/350, in_queue=1002, util=86.07% 00:15:05.353 nvme0n2: ios=112/512, merge=0/0, ticks=940/93, in_queue=1033, util=91.08% 00:15:05.353 nvme0n3: ios=581/1024, merge=0/0, ticks=692/188, in_queue=880, util=94.70% 00:15:05.353 nvme0n4: ios=1795/2048, merge=0/0, ticks=496/337, in_queue=833, util=95.29% 00:15:05.353 11:07:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:05.353 [global] 00:15:05.353 thread=1 00:15:05.353 invalidate=1 00:15:05.353 rw=write 00:15:05.353 time_based=1 00:15:05.353 runtime=1 00:15:05.353 ioengine=libaio 00:15:05.353 direct=1 00:15:05.353 bs=4096 00:15:05.353 iodepth=128 00:15:05.353 norandommap=0 00:15:05.353 numjobs=1 00:15:05.353 00:15:05.353 verify_dump=1 00:15:05.353 verify_backlog=512 00:15:05.353 verify_state_save=0 00:15:05.353 do_verify=1 00:15:05.353 verify=crc32c-intel 00:15:05.353 [job0] 00:15:05.353 filename=/dev/nvme0n1 00:15:05.353 [job1] 00:15:05.353 filename=/dev/nvme0n2 00:15:05.353 [job2] 00:15:05.353 filename=/dev/nvme0n3 00:15:05.353 [job3] 00:15:05.353 filename=/dev/nvme0n4 00:15:05.353 Could not set queue depth (nvme0n1) 00:15:05.353 Could not set queue depth (nvme0n2) 00:15:05.353 Could not set queue depth (nvme0n3) 
00:15:05.353 Could not set queue depth (nvme0n4) 00:15:05.610 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.610 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.610 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.610 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.610 fio-3.35 00:15:05.610 Starting 4 threads 00:15:06.979 00:15:06.979 job0: (groupid=0, jobs=1): err= 0: pid=2237168: Wed May 15 11:07:04 2024 00:15:06.979 read: IOPS=3412, BW=13.3MiB/s (14.0MB/s)(13.9MiB/1046msec) 00:15:06.979 slat (nsec): min=1468, max=12163k, avg=132152.24, stdev=885161.02 00:15:06.979 clat (usec): min=5485, max=57135, avg=17032.84, stdev=8797.92 00:15:06.979 lat (usec): min=5496, max=64178, avg=17164.99, stdev=8843.33 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 8291], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:15:06.979 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13435], 60.00th=[14353], 00:15:06.979 | 70.00th=[16581], 80.00th=[19792], 90.00th=[26084], 95.00th=[34341], 00:15:06.979 | 99.00th=[53216], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:15:06.979 | 99.99th=[56886] 00:15:06.979 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:15:06.979 slat (usec): min=2, max=13086, avg=140.54, stdev=672.39 00:15:06.979 clat (usec): min=2421, max=41533, avg=20002.66, stdev=7439.11 00:15:06.979 lat (usec): min=2431, max=41540, avg=20143.21, stdev=7502.18 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 4555], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11994], 00:15:06.979 | 30.00th=[15008], 40.00th=[18482], 50.00th=[22152], 60.00th=[22676], 00:15:06.979 | 70.00th=[23200], 80.00th=[25822], 90.00th=[29754], 95.00th=[33424], 00:15:06.979 | 99.00th=[34341], 
99.50th=[34341], 99.90th=[35914], 99.95th=[41681], 00:15:06.979 | 99.99th=[41681] 00:15:06.979 bw ( KiB/s): min=12288, max=16384, per=21.77%, avg=14336.00, stdev=2896.31, samples=2 00:15:06.979 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:15:06.979 lat (msec) : 4=0.36%, 10=4.29%, 20=56.33%, 50=37.26%, 100=1.76% 00:15:06.979 cpu : usr=3.64%, sys=3.83%, ctx=418, majf=0, minf=1 00:15:06.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:06.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.979 issued rwts: total=3569,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.979 job1: (groupid=0, jobs=1): err= 0: pid=2237175: Wed May 15 11:07:04 2024 00:15:06.979 read: IOPS=5218, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec) 00:15:06.979 slat (nsec): min=1256, max=9901.2k, avg=89954.64, stdev=625581.27 00:15:06.979 clat (usec): min=708, max=28833, avg=11389.09, stdev=2927.37 00:15:06.979 lat (usec): min=3914, max=31113, avg=11479.05, stdev=2965.53 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 4555], 5.00th=[ 7373], 10.00th=[ 8979], 20.00th=[ 9896], 00:15:06.979 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:15:06.979 | 70.00th=[11469], 80.00th=[12387], 90.00th=[15533], 95.00th=[17433], 00:15:06.979 | 99.00th=[19792], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:15:06.979 | 99.99th=[28705] 00:15:06.979 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:15:06.979 slat (usec): min=2, max=28157, avg=87.82, stdev=770.69 00:15:06.979 clat (usec): min=2063, max=73090, avg=11967.24, stdev=7896.87 00:15:06.979 lat (usec): min=2139, max=73121, avg=12055.06, stdev=7966.96 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 3818], 5.00th=[ 6063], 10.00th=[ 7767], 
20.00th=[ 9765], 00:15:06.979 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:15:06.979 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11731], 95.00th=[32900], 00:15:06.979 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[60556], 00:15:06.979 | 99.99th=[72877] 00:15:06.979 bw ( KiB/s): min=20439, max=24504, per=34.12%, avg=22471.50, stdev=2874.39, samples=2 00:15:06.979 iops : min= 5109, max= 6126, avg=5617.50, stdev=719.13, samples=2 00:15:06.979 lat (usec) : 750=0.01% 00:15:06.979 lat (msec) : 4=0.78%, 10=21.61%, 20=73.71%, 50=3.28%, 100=0.61% 00:15:06.979 cpu : usr=4.69%, sys=6.38%, ctx=530, majf=0, minf=1 00:15:06.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:06.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.979 issued rwts: total=5239,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.979 job2: (groupid=0, jobs=1): err= 0: pid=2237183: Wed May 15 11:07:04 2024 00:15:06.979 read: IOPS=4521, BW=17.7MiB/s (18.5MB/s)(18.5MiB/1047msec) 00:15:06.979 slat (nsec): min=1302, max=10458k, avg=97030.71, stdev=618087.75 00:15:06.979 clat (usec): min=4923, max=63213, avg=13366.06, stdev=7621.58 00:15:06.979 lat (usec): min=4931, max=63218, avg=13463.09, stdev=7644.61 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 6325], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[11469], 00:15:06.979 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:15:06.979 | 70.00th=[12518], 80.00th=[13566], 90.00th=[14615], 95.00th=[17433], 00:15:06.979 | 99.00th=[58459], 99.50th=[61080], 99.90th=[63177], 99.95th=[63177], 00:15:06.979 | 99.99th=[63177] 00:15:06.979 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1047msec); 0 zone resets 00:15:06.979 slat (usec): min=2, max=17912, avg=99.42, stdev=705.85 
00:15:06.979 clat (usec): min=4341, max=63219, avg=13357.07, stdev=5020.01 00:15:06.979 lat (usec): min=4358, max=63228, avg=13456.50, stdev=5088.07 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 6980], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11469], 00:15:06.979 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:15:06.979 | 70.00th=[12256], 80.00th=[12518], 90.00th=[17171], 95.00th=[28181], 00:15:06.979 | 99.00th=[30540], 99.50th=[31327], 99.90th=[34866], 99.95th=[43779], 00:15:06.979 | 99.99th=[63177] 00:15:06.979 bw ( KiB/s): min=19384, max=21560, per=31.08%, avg=20472.00, stdev=1538.66, samples=2 00:15:06.979 iops : min= 4846, max= 5390, avg=5118.00, stdev=384.67, samples=2 00:15:06.979 lat (msec) : 10=9.71%, 20=84.36%, 50=4.65%, 100=1.28% 00:15:06.979 cpu : usr=3.82%, sys=5.93%, ctx=512, majf=0, minf=1 00:15:06.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:06.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.979 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.979 job3: (groupid=0, jobs=1): err= 0: pid=2237189: Wed May 15 11:07:04 2024 00:15:06.979 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:15:06.979 slat (nsec): min=1652, max=16896k, avg=197988.46, stdev=1076125.48 00:15:06.979 clat (usec): min=4704, max=92960, avg=27236.81, stdev=17506.64 00:15:06.979 lat (usec): min=4712, max=92970, avg=27434.80, stdev=17573.43 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[15926], 00:15:06.979 | 30.00th=[20579], 40.00th=[21890], 50.00th=[22938], 60.00th=[23200], 00:15:06.979 | 70.00th=[25035], 80.00th=[38011], 90.00th=[53216], 95.00th=[60556], 00:15:06.979 | 99.00th=[84411], 99.50th=[92799], 
99.90th=[92799], 99.95th=[92799], 00:15:06.979 | 99.99th=[92799] 00:15:06.979 write: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1006msec); 0 zone resets 00:15:06.979 slat (usec): min=2, max=22002, avg=163.12, stdev=1069.81 00:15:06.979 clat (usec): min=1404, max=68688, avg=19948.80, stdev=12828.79 00:15:06.979 lat (usec): min=1419, max=80325, avg=20111.92, stdev=12917.80 00:15:06.979 clat percentiles (usec): 00:15:06.979 | 1.00th=[ 5145], 5.00th=[ 7439], 10.00th=[ 8356], 20.00th=[ 9372], 00:15:06.979 | 30.00th=[ 9765], 40.00th=[16909], 50.00th=[16909], 60.00th=[17433], 00:15:06.980 | 70.00th=[21365], 80.00th=[24249], 90.00th=[40109], 95.00th=[48497], 00:15:06.980 | 99.00th=[61604], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:15:06.980 | 99.99th=[68682] 00:15:06.980 bw ( KiB/s): min= 9920, max=12263, per=16.84%, avg=11091.50, stdev=1656.75, samples=2 00:15:06.980 iops : min= 2480, max= 3065, avg=2772.50, stdev=413.66, samples=2 00:15:06.980 lat (msec) : 2=0.04%, 4=0.02%, 10=22.88%, 20=25.15%, 50=44.71% 00:15:06.980 lat (msec) : 100=7.21% 00:15:06.980 cpu : usr=3.08%, sys=3.28%, ctx=309, majf=0, minf=1 00:15:06.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:06.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.980 issued rwts: total=2560,2904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.980 00:15:06.980 Run status group 0 (all jobs): 00:15:06.980 READ: bw=60.1MiB/s (63.0MB/s), 9.94MiB/s-20.4MiB/s (10.4MB/s-21.4MB/s), io=62.9MiB (66.0MB), run=1004-1047msec 00:15:06.980 WRITE: bw=64.3MiB/s (67.4MB/s), 11.3MiB/s-21.9MiB/s (11.8MB/s-23.0MB/s), io=67.3MiB (70.6MB), run=1004-1047msec 00:15:06.980 00:15:06.980 Disk stats (read/write): 00:15:06.980 nvme0n1: ios=2970/3072, merge=0/0, ticks=45844/60514, in_queue=106358, util=97.19% 
00:15:06.980 nvme0n2: ios=4582/4608, merge=0/0, ticks=47758/47566, in_queue=95324, util=88.13% 00:15:06.980 nvme0n3: ios=4120/4103, merge=0/0, ticks=25570/27879, in_queue=53449, util=95.12% 00:15:06.980 nvme0n4: ios=2343/2560, merge=0/0, ticks=19062/17879, in_queue=36941, util=90.90% 00:15:06.980 11:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:06.980 [global] 00:15:06.980 thread=1 00:15:06.980 invalidate=1 00:15:06.980 rw=randwrite 00:15:06.980 time_based=1 00:15:06.980 runtime=1 00:15:06.980 ioengine=libaio 00:15:06.980 direct=1 00:15:06.980 bs=4096 00:15:06.980 iodepth=128 00:15:06.980 norandommap=0 00:15:06.980 numjobs=1 00:15:06.980 00:15:06.980 verify_dump=1 00:15:06.980 verify_backlog=512 00:15:06.980 verify_state_save=0 00:15:06.980 do_verify=1 00:15:06.980 verify=crc32c-intel 00:15:06.980 [job0] 00:15:06.980 filename=/dev/nvme0n1 00:15:06.980 [job1] 00:15:06.980 filename=/dev/nvme0n2 00:15:06.980 [job2] 00:15:06.980 filename=/dev/nvme0n3 00:15:06.980 [job3] 00:15:06.980 filename=/dev/nvme0n4 00:15:06.980 Could not set queue depth (nvme0n1) 00:15:06.980 Could not set queue depth (nvme0n2) 00:15:06.980 Could not set queue depth (nvme0n3) 00:15:06.980 Could not set queue depth (nvme0n4) 00:15:07.236 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.236 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.236 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.236 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.236 fio-3.35 00:15:07.236 Starting 4 threads 00:15:08.607 00:15:08.607 job0: (groupid=0, jobs=1): err= 0: pid=2237632: Wed May 15 11:07:05 2024 00:15:08.607 
read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:15:08.607 slat (nsec): min=1353, max=9758.7k, avg=91511.11, stdev=573767.02 00:15:08.607 clat (usec): min=3869, max=21394, avg=11131.24, stdev=2078.82 00:15:08.607 lat (usec): min=3875, max=21404, avg=11222.75, stdev=2116.36 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 6194], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[10159], 00:15:08.607 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:15:08.607 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13304], 95.00th=[15270], 00:15:08.607 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20055], 99.95th=[21365], 00:15:08.607 | 99.99th=[21365] 00:15:08.607 write: IOPS=5794, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1005msec); 0 zone resets 00:15:08.607 slat (usec): min=2, max=8688, avg=77.24, stdev=373.39 00:15:08.607 clat (usec): min=1468, max=65197, avg=11126.75, stdev=3888.55 00:15:08.607 lat (usec): min=1480, max=65200, avg=11203.99, stdev=3907.45 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 4015], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[ 9896], 00:15:08.607 | 30.00th=[10421], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:15:08.607 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12518], 95.00th=[14484], 00:15:08.607 | 99.00th=[28181], 99.50th=[34341], 99.90th=[63701], 99.95th=[63701], 00:15:08.607 | 99.99th=[65274] 00:15:08.607 bw ( KiB/s): min=22024, max=23544, per=33.10%, avg=22784.00, stdev=1074.80, samples=2 00:15:08.607 iops : min= 5506, max= 5886, avg=5696.00, stdev=268.70, samples=2 00:15:08.607 lat (msec) : 2=0.05%, 4=0.52%, 10=17.58%, 20=80.62%, 50=1.11% 00:15:08.607 lat (msec) : 100=0.12% 00:15:08.607 cpu : usr=4.88%, sys=6.18%, ctx=705, majf=0, minf=1 00:15:08.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:08.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:15:08.607 issued rwts: total=5632,5823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.607 job1: (groupid=0, jobs=1): err= 0: pid=2237652: Wed May 15 11:07:05 2024 00:15:08.607 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:15:08.607 slat (nsec): min=1313, max=15772k, avg=116533.91, stdev=781149.46 00:15:08.607 clat (usec): min=4235, max=38210, avg=15413.60, stdev=5819.79 00:15:08.607 lat (usec): min=4245, max=38249, avg=15530.14, stdev=5891.60 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 7504], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:15:08.607 | 30.00th=[10945], 40.00th=[11863], 50.00th=[13960], 60.00th=[15270], 00:15:08.607 | 70.00th=[17433], 80.00th=[21365], 90.00th=[24773], 95.00th=[27395], 00:15:08.607 | 99.00th=[28967], 99.50th=[30540], 99.90th=[33817], 99.95th=[35390], 00:15:08.607 | 99.99th=[38011] 00:15:08.607 write: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1003msec); 0 zone resets 00:15:08.607 slat (usec): min=2, max=23813, avg=106.97, stdev=764.37 00:15:08.607 clat (usec): min=627, max=40755, avg=14559.13, stdev=6209.05 00:15:08.607 lat (usec): min=655, max=40775, avg=14666.10, stdev=6262.44 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 4146], 5.00th=[ 6718], 10.00th=[ 8586], 20.00th=[10159], 00:15:08.607 | 30.00th=[10552], 40.00th=[10945], 50.00th=[12518], 60.00th=[13566], 00:15:08.607 | 70.00th=[17695], 80.00th=[22152], 90.00th=[22938], 95.00th=[26084], 00:15:08.607 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[31589], 00:15:08.607 | 99.99th=[40633] 00:15:08.607 bw ( KiB/s): min=13224, max=20888, per=24.78%, avg=17056.00, stdev=5419.27, samples=2 00:15:08.607 iops : min= 3306, max= 5222, avg=4264.00, stdev=1354.82, samples=2 00:15:08.607 lat (usec) : 750=0.08% 00:15:08.607 lat (msec) : 2=0.20%, 4=0.15%, 10=13.30%, 20=61.93%, 50=24.33% 00:15:08.607 cpu : usr=3.29%, sys=7.39%, ctx=348, majf=0, 
minf=1 00:15:08.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:08.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.607 issued rwts: total=4096,4391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.607 job2: (groupid=0, jobs=1): err= 0: pid=2237676: Wed May 15 11:07:05 2024 00:15:08.607 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:15:08.607 slat (nsec): min=1039, max=22813k, avg=168945.81, stdev=1230241.58 00:15:08.607 clat (usec): min=7106, max=63068, avg=20564.29, stdev=7310.52 00:15:08.607 lat (usec): min=7112, max=63074, avg=20733.24, stdev=7421.46 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 7177], 5.00th=[10945], 10.00th=[12649], 20.00th=[16319], 00:15:08.607 | 30.00th=[17171], 40.00th=[17695], 50.00th=[19530], 60.00th=[20317], 00:15:08.607 | 70.00th=[22938], 80.00th=[23462], 90.00th=[28443], 95.00th=[38011], 00:15:08.607 | 99.00th=[40633], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:15:08.607 | 99.99th=[63177] 00:15:08.607 write: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec); 0 zone resets 00:15:08.607 slat (nsec): min=1840, max=12393k, avg=171061.91, stdev=934452.90 00:15:08.607 clat (usec): min=1336, max=65391, avg=25121.16, stdev=13150.36 00:15:08.607 lat (usec): min=1368, max=65398, avg=25292.22, stdev=13234.67 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[12387], 00:15:08.607 | 30.00th=[17433], 40.00th=[21890], 50.00th=[22676], 60.00th=[23200], 00:15:08.607 | 70.00th=[23987], 80.00th=[36963], 90.00th=[44827], 95.00th=[51643], 00:15:08.607 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:15:08.607 | 99.99th=[65274] 00:15:08.607 bw ( KiB/s): min=10560, max=12288, per=16.60%, avg=11424.00, 
stdev=1221.88, samples=2 00:15:08.607 iops : min= 2640, max= 3072, avg=2856.00, stdev=305.47, samples=2 00:15:08.607 lat (msec) : 2=0.29%, 10=5.81%, 20=36.19%, 50=54.39%, 100=3.32% 00:15:08.607 cpu : usr=1.99%, sys=2.59%, ctx=266, majf=0, minf=1 00:15:08.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:08.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.607 issued rwts: total=2560,2983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.607 job3: (groupid=0, jobs=1): err= 0: pid=2237687: Wed May 15 11:07:05 2024 00:15:08.607 read: IOPS=4080, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:15:08.607 slat (nsec): min=1162, max=63124k, avg=123118.53, stdev=1223191.43 00:15:08.607 clat (usec): min=1050, max=93138, avg=15422.54, stdev=12142.19 00:15:08.607 lat (usec): min=5847, max=93167, avg=15545.66, stdev=12212.60 00:15:08.607 clat percentiles (usec): 00:15:08.607 | 1.00th=[ 6259], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11469], 00:15:08.607 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12911], 00:15:08.607 | 70.00th=[13698], 80.00th=[15401], 90.00th=[19268], 95.00th=[30016], 00:15:08.607 | 99.00th=[85459], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:15:08.607 | 99.99th=[92799] 00:15:08.607 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:15:08.608 slat (usec): min=2, max=9014, avg=109.71, stdev=491.11 00:15:08.608 clat (usec): min=3086, max=74993, avg=15398.10, stdev=7724.03 00:15:08.608 lat (usec): min=3094, max=75634, avg=15507.81, stdev=7774.07 00:15:08.608 clat percentiles (usec): 00:15:08.608 | 1.00th=[ 7373], 5.00th=[10290], 10.00th=[11207], 20.00th=[11731], 00:15:08.608 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:15:08.608 | 70.00th=[13566], 80.00th=[16909], 
90.00th=[24249], 95.00th=[35390], 00:15:08.608 | 99.00th=[44827], 99.50th=[47449], 99.90th=[48497], 99.95th=[74974], 00:15:08.608 | 99.99th=[74974] 00:15:08.608 bw ( KiB/s): min=12288, max=20480, per=23.80%, avg=16384.00, stdev=5792.62, samples=2 00:15:08.608 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:08.608 lat (msec) : 2=0.01%, 4=0.07%, 10=6.71%, 20=80.43%, 50=11.23% 00:15:08.608 lat (msec) : 100=1.55% 00:15:08.608 cpu : usr=3.20%, sys=5.69%, ctx=519, majf=0, minf=1 00:15:08.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.608 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.608 00:15:08.608 Run status group 0 (all jobs): 00:15:08.608 READ: bw=63.7MiB/s (66.7MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1002-1005msec 00:15:08.608 WRITE: bw=67.2MiB/s (70.5MB/s), 11.6MiB/s-22.6MiB/s (12.2MB/s-23.7MB/s), io=67.6MiB (70.8MB), run=1002-1005msec 00:15:08.608 00:15:08.608 Disk stats (read/write): 00:15:08.608 nvme0n1: ios=4570/4608, merge=0/0, ticks=36778/37711, in_queue=74489, util=81.66% 00:15:08.608 nvme0n2: ios=3365/3584, merge=0/0, ticks=36606/38090, in_queue=74696, util=97.84% 00:15:08.608 nvme0n3: ios=2297/2560, merge=0/0, ticks=27889/32349, in_queue=60238, util=87.55% 00:15:08.608 nvme0n4: ios=3065/3072, merge=0/0, ticks=34692/36935, in_queue=71627, util=97.25% 00:15:08.608 11:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:08.608 11:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2237783 00:15:08.608 11:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 
00:15:08.608 11:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:08.608 [global] 00:15:08.608 thread=1 00:15:08.608 invalidate=1 00:15:08.608 rw=read 00:15:08.608 time_based=1 00:15:08.608 runtime=10 00:15:08.608 ioengine=libaio 00:15:08.608 direct=1 00:15:08.608 bs=4096 00:15:08.608 iodepth=1 00:15:08.608 norandommap=1 00:15:08.608 numjobs=1 00:15:08.608 00:15:08.608 [job0] 00:15:08.608 filename=/dev/nvme0n1 00:15:08.608 [job1] 00:15:08.608 filename=/dev/nvme0n2 00:15:08.608 [job2] 00:15:08.608 filename=/dev/nvme0n3 00:15:08.608 [job3] 00:15:08.608 filename=/dev/nvme0n4 00:15:08.608 Could not set queue depth (nvme0n1) 00:15:08.608 Could not set queue depth (nvme0n2) 00:15:08.608 Could not set queue depth (nvme0n3) 00:15:08.608 Could not set queue depth (nvme0n4) 00:15:08.865 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.865 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.865 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.865 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.865 fio-3.35 00:15:08.865 Starting 4 threads 00:15:12.152 11:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:12.152 11:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:12.152 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=352256, buflen=4096 00:15:12.152 fio: pid=2238127, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.152 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.152 11:07:09 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:12.152 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=315392, buflen=4096 00:15:12.152 fio: pid=2238126, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.152 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.152 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:12.152 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35364864, buflen=4096 00:15:12.152 fio: pid=2238109, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.427 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=52707328, buflen=4096 00:15:12.427 fio: pid=2238121, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.427 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.427 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:12.427 00:15:12.427 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2238109: Wed May 15 11:07:09 2024 00:15:12.427 read: IOPS=2734, BW=10.7MiB/s (11.2MB/s)(33.7MiB/3158msec) 00:15:12.427 slat (usec): min=6, max=29804, avg=13.61, stdev=320.73 00:15:12.427 clat (usec): min=162, max=44076, avg=347.24, stdev=1867.55 00:15:12.427 lat (usec): min=199, max=70967, avg=360.85, stdev=1968.26 00:15:12.427 clat percentiles (usec): 00:15:12.427 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:15:12.427 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 
00:15:12.427 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 416], 00:15:12.427 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41157], 00:15:12.427 | 99.99th=[44303] 00:15:12.427 bw ( KiB/s): min= 1014, max=15240, per=43.87%, avg=11475.67, stdev=5336.26, samples=6 00:15:12.427 iops : min= 253, max= 3810, avg=2868.83, stdev=1334.26, samples=6 00:15:12.427 lat (usec) : 250=48.14%, 500=51.49%, 750=0.13%, 1000=0.01% 00:15:12.427 lat (msec) : 4=0.01%, 50=0.21% 00:15:12.427 cpu : usr=1.49%, sys=4.75%, ctx=8637, majf=0, minf=1 00:15:12.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.427 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.427 issued rwts: total=8635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.427 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2238121: Wed May 15 11:07:09 2024 00:15:12.427 read: IOPS=3884, BW=15.2MiB/s (15.9MB/s)(50.3MiB/3313msec) 00:15:12.427 slat (usec): min=6, max=22788, avg=11.72, stdev=245.14 00:15:12.427 clat (usec): min=181, max=672, avg=242.03, stdev=21.14 00:15:12.427 lat (usec): min=190, max=23206, avg=253.74, stdev=248.33 00:15:12.427 clat percentiles (usec): 00:15:12.427 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 227], 00:15:12.427 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:15:12.427 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 269], 00:15:12.427 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 457], 99.95th=[ 474], 00:15:12.427 | 99.99th=[ 482] 00:15:12.427 bw ( KiB/s): min=15352, max=16656, per=59.93%, avg=15676.00, stdev=484.43, samples=6 00:15:12.427 iops : min= 3838, max= 4164, avg=3919.00, stdev=121.11, samples=6 00:15:12.427 lat (usec) : 250=65.25%, 500=34.73%, 750=0.01% 
00:15:12.427 cpu : usr=2.39%, sys=5.86%, ctx=12873, majf=0, minf=1 00:15:12.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.427 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.427 issued rwts: total=12869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.427 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2238126: Wed May 15 11:07:09 2024 00:15:12.427 read: IOPS=26, BW=105KiB/s (108kB/s)(308KiB/2933msec) 00:15:12.427 slat (nsec): min=9384, max=31677, avg=20512.56, stdev=4732.99 00:15:12.427 clat (usec): min=267, max=42093, avg=37798.80, stdev=10968.44 00:15:12.427 lat (usec): min=289, max=42115, avg=37819.29, stdev=10967.59 00:15:12.427 clat percentiles (usec): 00:15:12.427 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[40633], 20.00th=[41157], 00:15:12.427 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:12.427 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:12.427 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:12.427 | 99.99th=[42206] 00:15:12.428 bw ( KiB/s): min= 104, max= 112, per=0.41%, avg=107.20, stdev= 4.38, samples=5 00:15:12.428 iops : min= 26, max= 28, avg=26.80, stdev= 1.10, samples=5 00:15:12.428 lat (usec) : 500=6.41%, 750=1.28% 00:15:12.428 lat (msec) : 50=91.03% 00:15:12.428 cpu : usr=0.10%, sys=0.00%, ctx=78, majf=0, minf=1 00:15:12.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.428 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.428 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.428 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:15:12.428 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2238127: Wed May 15 11:07:09 2024 00:15:12.428 read: IOPS=31, BW=125KiB/s (128kB/s)(344KiB/2745msec) 00:15:12.428 slat (nsec): min=6369, max=35548, avg=20180.56, stdev=6502.63 00:15:12.428 clat (usec): min=228, max=42024, avg=31654.51, stdev=17276.96 00:15:12.428 lat (usec): min=236, max=42047, avg=31674.65, stdev=17282.10 00:15:12.428 clat percentiles (usec): 00:15:12.428 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 424], 00:15:12.428 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:12.428 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:15:12.428 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:12.428 | 99.99th=[42206] 00:15:12.428 bw ( KiB/s): min= 96, max= 184, per=0.49%, avg=128.00, stdev=36.22, samples=5 00:15:12.428 iops : min= 24, max= 46, avg=32.00, stdev= 9.06, samples=5 00:15:12.428 lat (usec) : 250=12.64%, 500=9.20% 00:15:12.428 lat (msec) : 10=1.15%, 50=75.86% 00:15:12.428 cpu : usr=0.00%, sys=0.11%, ctx=87, majf=0, minf=2 00:15:12.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.428 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.428 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.428 00:15:12.428 Run status group 0 (all jobs): 00:15:12.428 READ: bw=25.5MiB/s (26.8MB/s), 105KiB/s-15.2MiB/s (108kB/s-15.9MB/s), io=84.6MiB (88.7MB), run=2745-3313msec 00:15:12.428 00:15:12.428 Disk stats (read/write): 00:15:12.428 nvme0n1: ios=8668/0, merge=0/0, ticks=3893/0, in_queue=3893, util=98.06% 00:15:12.428 nvme0n2: ios=12186/0, merge=0/0, ticks=2848/0, in_queue=2848, util=95.58% 00:15:12.428 
nvme0n3: ios=75/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.55% 00:15:12.428 nvme0n4: ios=83/0, merge=0/0, ticks=2601/0, in_queue=2601, util=96.49% 00:15:12.428 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.428 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:12.684 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.684 11:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:12.942 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.942 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:13.199 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.199 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:13.199 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:13.199 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2237783 00:15:13.199 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:13.199 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.455 11:07:10 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:13.455 nvmf hotplug test: fio failed as expected 00:15:13.455 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.712 rmmod nvme_tcp 00:15:13.712 rmmod nvme_fabrics 00:15:13.712 rmmod nvme_keyring 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2235062 ']' 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2235062 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 2235062 ']' 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 2235062 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2235062 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2235062' 00:15:13.712 killing process with pid 2235062 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 2235062 00:15:13.712 [2024-05-15 11:07:10.893538] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:13.712 11:07:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 2235062 
00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.969 11:07:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.957 11:07:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.958 00:15:15.958 real 0m26.068s 00:15:15.958 user 1m45.935s 00:15:15.958 sys 0m7.904s 00:15:15.958 11:07:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:15.958 11:07:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.958 ************************************ 00:15:15.958 END TEST nvmf_fio_target 00:15:15.958 ************************************ 00:15:15.958 11:07:13 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:15.958 11:07:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:15.958 11:07:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:15.958 11:07:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.216 ************************************ 00:15:16.216 START TEST nvmf_bdevio 00:15:16.216 ************************************ 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:16.216 * Looking for test storage... 00:15:16.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.216 11:07:13 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:16.216 11:07:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.217 11:07:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.482 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:21.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.743 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:21.744 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:21.744 Found net devices under 0000:86:00.0: cvl_0_0 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:21.744 Found net devices under 0000:86:00.1: cvl_0_1 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.744 11:07:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.744 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:15:21.744 00:15:21.744 --- 10.0.0.2 ping statistics --- 00:15:21.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.744 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:15:21.744 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:22.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:15:22.002 00:15:22.002 --- 10.0.0.1 ping statistics --- 00:15:22.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.002 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2242363 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2242363 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 2242363 ']' 00:15:22.002 11:07:19 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:22.002 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.002 [2024-05-15 11:07:19.109140] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:15:22.002 [2024-05-15 11:07:19.109190] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.002 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.002 [2024-05-15 11:07:19.167540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.002 [2024-05-15 11:07:19.239372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.002 [2024-05-15 11:07:19.239413] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.002 [2024-05-15 11:07:19.239419] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.002 [2024-05-15 11:07:19.239425] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.002 [2024-05-15 11:07:19.239430] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.002 [2024-05-15 11:07:19.239506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:22.002 [2024-05-15 11:07:19.239617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:22.002 [2024-05-15 11:07:19.239704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:22.002 [2024-05-15 11:07:19.239703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 [2024-05-15 11:07:19.959015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 Malloc0 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.935 11:07:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 [2024-05-15 11:07:20.010147] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:22.935 [2024-05-15 11:07:20.010409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 
-- # local subsystem config 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:22.935 { 00:15:22.935 "params": { 00:15:22.935 "name": "Nvme$subsystem", 00:15:22.935 "trtype": "$TEST_TRANSPORT", 00:15:22.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:22.935 "adrfam": "ipv4", 00:15:22.935 "trsvcid": "$NVMF_PORT", 00:15:22.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:22.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:22.935 "hdgst": ${hdgst:-false}, 00:15:22.935 "ddgst": ${ddgst:-false} 00:15:22.935 }, 00:15:22.935 "method": "bdev_nvme_attach_controller" 00:15:22.935 } 00:15:22.935 EOF 00:15:22.935 )") 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:22.935 11:07:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:22.935 "params": { 00:15:22.935 "name": "Nvme1", 00:15:22.935 "trtype": "tcp", 00:15:22.935 "traddr": "10.0.0.2", 00:15:22.935 "adrfam": "ipv4", 00:15:22.935 "trsvcid": "4420", 00:15:22.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.935 "hdgst": false, 00:15:22.935 "ddgst": false 00:15:22.935 }, 00:15:22.935 "method": "bdev_nvme_attach_controller" 00:15:22.935 }' 00:15:22.935 [2024-05-15 11:07:20.049734] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:15:22.935 [2024-05-15 11:07:20.049781] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242600 ] 00:15:22.935 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.935 [2024-05-15 11:07:20.106703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.935 [2024-05-15 11:07:20.181889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.935 [2024-05-15 11:07:20.181983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.935 [2024-05-15 11:07:20.181983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.501 I/O targets: 00:15:23.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:23.501 00:15:23.501 00:15:23.501 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.501 http://cunit.sourceforge.net/ 00:15:23.501 00:15:23.501 00:15:23.501 Suite: bdevio tests on: Nvme1n1 00:15:23.501 Test: blockdev write read block ...passed 00:15:23.502 Test: blockdev write zeroes read block ...passed 00:15:23.502 Test: blockdev write zeroes read no split ...passed 00:15:23.502 Test: blockdev write zeroes read split ...passed 00:15:23.502 Test: blockdev write zeroes read split partial ...passed 00:15:23.502 Test: blockdev reset ...[2024-05-15 11:07:20.653786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.502 [2024-05-15 11:07:20.653849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17837f0 (9): Bad file descriptor 00:15:23.502 [2024-05-15 11:07:20.668469] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:23.502 passed 00:15:23.502 Test: blockdev write read 8 blocks ...passed 00:15:23.502 Test: blockdev write read size > 128k ...passed 00:15:23.502 Test: blockdev write read invalid size ...passed 00:15:23.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.502 Test: blockdev write read max offset ...passed 00:15:23.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.760 Test: blockdev writev readv 8 blocks ...passed 00:15:23.760 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.760 Test: blockdev writev readv block ...passed 00:15:23.760 Test: blockdev writev readv size > 128k ...passed 00:15:23.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.760 Test: blockdev comparev and writev ...[2024-05-15 11:07:20.839902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.839929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.839943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.839951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.840218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.840228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.840240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.840247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.840502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.840512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.840523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.840531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.840769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.840779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.840791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.760 [2024-05-15 11:07:20.840798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:23.760 passed 00:15:23.760 Test: blockdev nvme passthru rw ...passed 00:15:23.760 Test: blockdev nvme passthru vendor specific ...[2024-05-15 11:07:20.924521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.760 [2024-05-15 11:07:20.924537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.924660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.760 [2024-05-15 11:07:20.924670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.924792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.760 [2024-05-15 11:07:20.924804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:23.760 [2024-05-15 11:07:20.924922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.760 [2024-05-15 11:07:20.924931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:23.760 passed 00:15:23.760 Test: blockdev nvme admin passthru ...passed 00:15:23.760 Test: blockdev copy ...passed 00:15:23.760 00:15:23.760 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.760 suites 1 1 n/a 0 0 00:15:23.760 tests 23 23 23 0 0 00:15:23.760 asserts 152 152 152 0 n/a 00:15:23.760 00:15:23.760 Elapsed time = 0.977 seconds 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.019 rmmod nvme_tcp 00:15:24.019 rmmod nvme_fabrics 00:15:24.019 rmmod nvme_keyring 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2242363 ']' 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2242363 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 2242363 ']' 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 2242363 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:24.019 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2242363 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2242363' 00:15:24.278 killing process with pid 2242363 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 
2242363 00:15:24.278 [2024-05-15 11:07:21.301415] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 2242363 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.278 11:07:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.810 11:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.810 00:15:26.810 real 0m10.366s 00:15:26.810 user 0m12.960s 00:15:26.810 sys 0m4.827s 00:15:26.810 11:07:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:26.810 11:07:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:26.810 ************************************ 00:15:26.810 END TEST nvmf_bdevio 00:15:26.810 ************************************ 00:15:26.810 11:07:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:26.810 11:07:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:26.810 11:07:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:26.810 11:07:23 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.810 ************************************ 00:15:26.810 START TEST nvmf_auth_target 00:15:26.810 ************************************ 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:26.810 * Looking for test storage... 00:15:26.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 
00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.810 11:07:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.811 11:07:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.811 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:32.076 
11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.076 
11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:32.076 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.076 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.076 11:07:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.076 11:07:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 
00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:32.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:15:32.076 00:15:32.076 --- 10.0.0.2 ping statistics --- 00:15:32.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.076 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:15:32.076 00:15:32.076 --- 10.0.0.1 ping statistics --- 00:15:32.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.076 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.076 11:07:29 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2246140 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2246140 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2246140 ']' 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:32.076 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=2246387 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=00f3cf8268b6c1b3f5cce0d2d16d3547ae16c3cbafb6a84e 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HWV 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 00f3cf8268b6c1b3f5cce0d2d16d3547ae16c3cbafb6a84e 0 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 00f3cf8268b6c1b3f5cce0d2d16d3547ae16c3cbafb6a84e 0 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=00f3cf8268b6c1b3f5cce0d2d16d3547ae16c3cbafb6a84e 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HWV 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HWV 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.HWV 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:33.008 11:07:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ebda657cbd1f4593637361929c7a36b 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.a4e 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ebda657cbd1f4593637361929c7a36b 1 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ebda657cbd1f4593637361929c7a36b 1 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ebda657cbd1f4593637361929c7a36b 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.a4e 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.a4e 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.a4e 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:33.008 11:07:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e875b788d8d876b01147ea21d3c38a74247c37df68b8ec4 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yrg 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e875b788d8d876b01147ea21d3c38a74247c37df68b8ec4 2 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e875b788d8d876b01147ea21d3c38a74247c37df68b8ec4 2 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e875b788d8d876b01147ea21d3c38a74247c37df68b8ec4 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:33.008 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yrg 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yrg 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.yrg 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@724 -- # local -A digests 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:33.265 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6a3b6aecc7b34ee60ee531301a3ef2dfabb9f6fc9afbc1703f95367078810e87 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iXv 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6a3b6aecc7b34ee60ee531301a3ef2dfabb9f6fc9afbc1703f95367078810e87 3 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6a3b6aecc7b34ee60ee531301a3ef2dfabb9f6fc9afbc1703f95367078810e87 3 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6a3b6aecc7b34ee60ee531301a3ef2dfabb9f6fc9afbc1703f95367078810e87 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iXv 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iXv 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.iXv 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 2246140 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' 
-z 2246140 ']' 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 2246387 /var/tmp/host.sock 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2246387 ']' 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:33.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
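The key-generation trace above (`xxd -p -c0 -l 32 /dev/urandom`, then `format_dhchap_key` via an inline `python -` step) can be sketched as follows. This is a minimal reconstruction, assuming the DHHC-1 wire format is base64 of the secret text followed by its little-endian CRC-32, with a two-digit digest id (00=null, 01=sha256, 02=sha384, 03=sha512) as the middle field; it is not SPDK's exact helper.

```python
import base64
import os
import zlib

def format_dhchap_key(secret: str, digest: int) -> str:
    """Render a DHCHAP secret in the DHHC-1 transport format.

    Assumption: payload = base64(secret text + little-endian CRC-32 of
    that text), bracketed by "DHHC-1:<digest id>:" and a trailing colon.
    """
    data = secret.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")
    return f"DHHC-1:{digest:02d}:{base64.b64encode(data + crc).decode()}:"

# Mirror of the trace: 32 random bytes, hex-encoded, formatted for sha512.
secret = os.urandom(32).hex()
print(format_dhchap_key(secret, 3))
```

Decoding the base64 field of the `DHHC-1:00:...` secret that appears later in this log yields the ASCII hex key plus four CRC bytes, which is what this sketch reproduces.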
00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:33.266 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HWV 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.HWV 00:15:33.523 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.HWV 00:15:33.780 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:33.780 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.a4e 00:15:33.780 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.780 11:07:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.780 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.780 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.a4e 00:15:33.780 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.a4e 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yrg 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yrg 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yrg 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iXv 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc 
keyring_file_add_key key3 /tmp/spdk.key-sha512.iXv 00:15:34.037 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.iXv 00:15:34.295 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:15:34.295 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.295 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:34.295 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.295 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:34.553 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:34.811 00:15:34.811 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:34.811 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:34.811 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:34.811 { 00:15:34.811 "cntlid": 1, 00:15:34.811 "qid": 0, 00:15:34.811 "state": "enabled", 00:15:34.811 "listen_address": { 00:15:34.811 "trtype": "TCP", 00:15:34.811 "adrfam": "IPv4", 00:15:34.811 "traddr": "10.0.0.2", 00:15:34.811 "trsvcid": "4420" 00:15:34.811 }, 00:15:34.811 "peer_address": { 00:15:34.811 "trtype": "TCP", 00:15:34.811 "adrfam": "IPv4", 00:15:34.811 "traddr": "10.0.0.1", 00:15:34.811 
"trsvcid": "58140" 00:15:34.811 }, 00:15:34.811 "auth": { 00:15:34.811 "state": "completed", 00:15:34.811 "digest": "sha256", 00:15:34.811 "dhgroup": "null" 00:15:34.811 } 00:15:34.811 } 00:15:34.811 ]' 00:15:34.811 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.068 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.356 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.932 
11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.932 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:35.932 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:36.190 00:15:36.190 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:36.190 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.190 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:36.448 { 00:15:36.448 "cntlid": 3, 00:15:36.448 "qid": 0, 00:15:36.448 "state": "enabled", 00:15:36.448 "listen_address": { 00:15:36.448 "trtype": "TCP", 00:15:36.448 "adrfam": "IPv4", 00:15:36.448 "traddr": "10.0.0.2", 00:15:36.448 "trsvcid": "4420" 00:15:36.448 }, 00:15:36.448 "peer_address": { 00:15:36.448 "trtype": "TCP", 00:15:36.448 "adrfam": "IPv4", 00:15:36.448 "traddr": "10.0.0.1", 00:15:36.448 "trsvcid": "46376" 00:15:36.448 }, 00:15:36.448 "auth": { 00:15:36.448 "state": "completed", 00:15:36.448 "digest": "sha256", 00:15:36.448 "dhgroup": "null" 00:15:36.448 } 00:15:36.448 } 00:15:36.448 ]' 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.448 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.707 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:37.273 11:07:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.273 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:37.531 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:37.790 00:15:37.790 
11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:37.790 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:37.790 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:37.790 { 00:15:37.790 "cntlid": 5, 00:15:37.790 "qid": 0, 00:15:37.790 "state": "enabled", 00:15:37.790 "listen_address": { 00:15:37.790 "trtype": "TCP", 00:15:37.790 "adrfam": "IPv4", 00:15:37.790 "traddr": "10.0.0.2", 00:15:37.790 "trsvcid": "4420" 00:15:37.790 }, 00:15:37.790 "peer_address": { 00:15:37.790 "trtype": "TCP", 00:15:37.790 "adrfam": "IPv4", 00:15:37.790 "traddr": "10.0.0.1", 00:15:37.790 "trsvcid": "46412" 00:15:37.790 }, 00:15:37.790 "auth": { 00:15:37.790 "state": "completed", 00:15:37.790 "digest": "sha256", 00:15:37.790 "dhgroup": "null" 00:15:37.790 } 00:15:37.790 } 00:15:37.790 ]' 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.790 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:38.050 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 
00:15:38.050 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:38.050 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.050 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.050 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.050 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.617 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.876 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.134 00:15:39.134 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:39.134 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:39.134 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.404 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.404 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.404 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.404 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.404 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.404 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:39.404 { 00:15:39.404 "cntlid": 7, 00:15:39.404 "qid": 0, 00:15:39.405 "state": "enabled", 00:15:39.405 "listen_address": { 00:15:39.405 "trtype": "TCP", 00:15:39.405 "adrfam": "IPv4", 00:15:39.405 "traddr": "10.0.0.2", 00:15:39.405 "trsvcid": "4420" 00:15:39.405 }, 00:15:39.405 "peer_address": { 00:15:39.405 "trtype": "TCP", 00:15:39.405 "adrfam": "IPv4", 00:15:39.405 "traddr": "10.0.0.1", 00:15:39.405 "trsvcid": "46436" 00:15:39.405 }, 00:15:39.405 "auth": { 00:15:39.405 "state": "completed", 00:15:39.405 "digest": "sha256", 00:15:39.405 "dhgroup": "null" 00:15:39.405 } 00:15:39.405 } 00:15:39.405 ]' 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.405 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.665 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- 
# connect_authenticate sha256 ffdhe2048 0 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.233 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.492 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.492 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:40.492 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:40.492 00:15:40.492 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:40.492 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:40.492 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:40.750 { 00:15:40.750 "cntlid": 9, 00:15:40.750 "qid": 0, 00:15:40.750 "state": "enabled", 00:15:40.750 "listen_address": { 00:15:40.750 "trtype": "TCP", 00:15:40.750 "adrfam": "IPv4", 00:15:40.750 "traddr": "10.0.0.2", 00:15:40.750 "trsvcid": "4420" 00:15:40.750 }, 00:15:40.750 "peer_address": { 00:15:40.750 "trtype": "TCP", 00:15:40.750 "adrfam": "IPv4", 00:15:40.750 "traddr": "10.0.0.1", 00:15:40.750 "trsvcid": "46456" 00:15:40.750 }, 00:15:40.750 "auth": { 00:15:40.750 "state": "completed", 00:15:40.750 "digest": "sha256", 00:15:40.750 "dhgroup": "ffdhe2048" 00:15:40.750 } 00:15:40.750 } 00:15:40.750 ]' 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.750 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:40.750 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.750 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:41.008 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.008 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.008 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.008 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.574 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.832 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:41.833 11:07:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:41.833 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:42.091 00:15:42.091 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:42.091 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:42.091 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:42.350 { 00:15:42.350 "cntlid": 11, 00:15:42.350 "qid": 0, 00:15:42.350 "state": "enabled", 00:15:42.350 "listen_address": { 00:15:42.350 "trtype": "TCP", 00:15:42.350 "adrfam": "IPv4", 00:15:42.350 "traddr": "10.0.0.2", 00:15:42.350 "trsvcid": "4420" 00:15:42.350 }, 00:15:42.350 "peer_address": { 00:15:42.350 "trtype": "TCP", 00:15:42.350 "adrfam": "IPv4", 00:15:42.350 "traddr": "10.0.0.1", 00:15:42.350 "trsvcid": "46492" 00:15:42.350 }, 00:15:42.350 "auth": { 00:15:42.350 "state": "completed", 00:15:42.350 "digest": "sha256", 00:15:42.350 "dhgroup": "ffdhe2048" 00:15:42.350 } 00:15:42.350 } 00:15:42.350 ]' 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.350 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.609 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:15:43.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.176 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.176 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.176 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.177 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.436 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.436 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:43.436 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:43.436 00:15:43.436 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:43.436 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:43.436 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.695 11:07:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:43.695 { 00:15:43.695 "cntlid": 13, 00:15:43.695 "qid": 0, 00:15:43.695 "state": "enabled", 00:15:43.695 "listen_address": { 00:15:43.695 "trtype": "TCP", 00:15:43.695 "adrfam": "IPv4", 00:15:43.695 "traddr": "10.0.0.2", 00:15:43.695 "trsvcid": "4420" 00:15:43.695 }, 00:15:43.695 "peer_address": { 00:15:43.695 "trtype": "TCP", 00:15:43.695 "adrfam": "IPv4", 00:15:43.695 "traddr": "10.0.0.1", 00:15:43.695 "trsvcid": "46532" 00:15:43.695 }, 00:15:43.695 "auth": { 00:15:43.695 "state": "completed", 00:15:43.695 "digest": "sha256", 00:15:43.695 "dhgroup": "ffdhe2048" 00:15:43.695 } 00:15:43.695 } 00:15:43.695 ]' 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.695 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:43.954 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.954 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.954 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.954 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:15:44.544 11:07:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.544 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:44.802 11:07:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.802 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.059 00:15:45.059 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:45.059 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.059 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:45.318 { 00:15:45.318 "cntlid": 15, 00:15:45.318 "qid": 0, 00:15:45.318 "state": "enabled", 00:15:45.318 "listen_address": { 00:15:45.318 "trtype": "TCP", 00:15:45.318 "adrfam": "IPv4", 
00:15:45.318 "traddr": "10.0.0.2", 00:15:45.318 "trsvcid": "4420" 00:15:45.318 }, 00:15:45.318 "peer_address": { 00:15:45.318 "trtype": "TCP", 00:15:45.318 "adrfam": "IPv4", 00:15:45.318 "traddr": "10.0.0.1", 00:15:45.318 "trsvcid": "46550" 00:15:45.318 }, 00:15:45.318 "auth": { 00:15:45.318 "state": "completed", 00:15:45.318 "digest": "sha256", 00:15:45.318 "dhgroup": "ffdhe2048" 00:15:45.318 } 00:15:45.318 } 00:15:45.318 ]' 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.318 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.575 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:46.141 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:46.400 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:46.658 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:46.658 { 00:15:46.658 "cntlid": 17, 00:15:46.658 "qid": 0, 00:15:46.658 "state": "enabled", 00:15:46.658 "listen_address": { 00:15:46.658 "trtype": "TCP", 00:15:46.658 "adrfam": "IPv4", 00:15:46.658 "traddr": "10.0.0.2", 00:15:46.658 "trsvcid": "4420" 00:15:46.658 }, 00:15:46.658 "peer_address": { 
00:15:46.658 "trtype": "TCP", 00:15:46.658 "adrfam": "IPv4", 00:15:46.658 "traddr": "10.0.0.1", 00:15:46.658 "trsvcid": "41288" 00:15:46.658 }, 00:15:46.658 "auth": { 00:15:46.658 "state": "completed", 00:15:46.658 "digest": "sha256", 00:15:46.658 "dhgroup": "ffdhe3072" 00:15:46.658 } 00:15:46.658 } 00:15:46.658 ]' 00:15:46.658 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:46.917 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.917 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:46.917 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.917 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:46.917 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.917 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.917 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.917 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.484 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:47.743 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:48.002 00:15:48.002 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:48.002 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:48.002 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:48.260 { 00:15:48.260 "cntlid": 19, 00:15:48.260 "qid": 0, 00:15:48.260 "state": "enabled", 00:15:48.260 "listen_address": { 00:15:48.260 "trtype": "TCP", 00:15:48.260 "adrfam": "IPv4", 00:15:48.260 "traddr": "10.0.0.2", 00:15:48.260 "trsvcid": "4420" 00:15:48.260 }, 00:15:48.260 "peer_address": { 00:15:48.260 "trtype": "TCP", 00:15:48.260 "adrfam": "IPv4", 00:15:48.260 "traddr": "10.0.0.1", 00:15:48.260 "trsvcid": "41318" 00:15:48.260 }, 00:15:48.260 "auth": { 00:15:48.260 "state": "completed", 
00:15:48.260 "digest": "sha256", 00:15:48.260 "dhgroup": "ffdhe3072" 00:15:48.260 } 00:15:48.260 } 00:15:48.260 ]' 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.260 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.518 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.085 11:07:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.085 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:49.343 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:49.601 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:49.601 { 00:15:49.601 "cntlid": 21, 00:15:49.601 "qid": 0, 00:15:49.601 "state": "enabled", 00:15:49.601 "listen_address": { 00:15:49.601 "trtype": "TCP", 00:15:49.601 "adrfam": "IPv4", 00:15:49.601 "traddr": "10.0.0.2", 00:15:49.601 "trsvcid": "4420" 00:15:49.601 }, 00:15:49.601 "peer_address": { 00:15:49.601 "trtype": "TCP", 00:15:49.601 "adrfam": "IPv4", 00:15:49.601 "traddr": "10.0.0.1", 00:15:49.601 "trsvcid": "41352" 00:15:49.601 }, 00:15:49.601 "auth": { 00:15:49.601 "state": "completed", 00:15:49.601 "digest": "sha256", 00:15:49.601 "dhgroup": "ffdhe3072" 00:15:49.601 } 00:15:49.601 } 00:15:49.601 ]' 00:15:49.601 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.860 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.118 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.685 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.944 00:15:50.944 11:07:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:50.944 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:50.944 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:51.203 { 00:15:51.203 "cntlid": 23, 00:15:51.203 "qid": 0, 00:15:51.203 "state": "enabled", 00:15:51.203 "listen_address": { 00:15:51.203 "trtype": "TCP", 00:15:51.203 "adrfam": "IPv4", 00:15:51.203 "traddr": "10.0.0.2", 00:15:51.203 "trsvcid": "4420" 00:15:51.203 }, 00:15:51.203 "peer_address": { 00:15:51.203 "trtype": "TCP", 00:15:51.203 "adrfam": "IPv4", 00:15:51.203 "traddr": "10.0.0.1", 00:15:51.203 "trsvcid": "41366" 00:15:51.203 }, 00:15:51.203 "auth": { 00:15:51.203 "state": "completed", 00:15:51.203 "digest": "sha256", 00:15:51.203 "dhgroup": "ffdhe3072" 00:15:51.203 } 00:15:51.203 } 00:15:51.203 ]' 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.203 
11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.203 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.461 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.050 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:52.051 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.051 11:07:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.320 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:15:52.320 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:52.321 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:52.579 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.579 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:52.837 { 00:15:52.837 "cntlid": 25, 00:15:52.837 "qid": 0, 00:15:52.837 "state": "enabled", 00:15:52.837 "listen_address": { 00:15:52.837 "trtype": "TCP", 00:15:52.837 "adrfam": "IPv4", 00:15:52.837 "traddr": "10.0.0.2", 00:15:52.837 "trsvcid": "4420" 00:15:52.837 }, 00:15:52.837 "peer_address": { 00:15:52.837 "trtype": "TCP", 00:15:52.837 "adrfam": "IPv4", 00:15:52.837 "traddr": "10.0.0.1", 00:15:52.837 "trsvcid": "41392" 00:15:52.837 }, 00:15:52.837 "auth": { 00:15:52.837 "state": "completed", 00:15:52.837 "digest": "sha256", 00:15:52.837 "dhgroup": "ffdhe4096" 00:15:52.837 } 00:15:52.837 } 00:15:52.837 ]' 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.837 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.095 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:15:53.660 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.660 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # 
connect_authenticate sha256 ffdhe4096 1 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:53.661 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:53.920 00:15:53.920 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:53.920 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:53.920 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:54.178 { 00:15:54.178 "cntlid": 27, 00:15:54.178 "qid": 0, 00:15:54.178 "state": "enabled", 00:15:54.178 "listen_address": { 00:15:54.178 "trtype": "TCP", 00:15:54.178 "adrfam": "IPv4", 00:15:54.178 "traddr": "10.0.0.2", 00:15:54.178 "trsvcid": "4420" 00:15:54.178 }, 00:15:54.178 "peer_address": { 00:15:54.178 "trtype": "TCP", 00:15:54.178 "adrfam": "IPv4", 00:15:54.178 "traddr": "10.0.0.1", 00:15:54.178 "trsvcid": "41422" 00:15:54.178 }, 00:15:54.178 "auth": { 00:15:54.178 "state": "completed", 00:15:54.178 "digest": "sha256", 00:15:54.178 "dhgroup": "ffdhe4096" 00:15:54.178 } 00:15:54.178 } 00:15:54.178 ]' 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.178 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:54.436 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.436 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.436 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.436 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.003 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:55.262 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:55.521 00:15:55.521 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:55.521 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:55.521 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:55.780 { 00:15:55.780 "cntlid": 29, 00:15:55.780 "qid": 0, 00:15:55.780 "state": "enabled", 00:15:55.780 "listen_address": { 00:15:55.780 "trtype": "TCP", 00:15:55.780 "adrfam": "IPv4", 00:15:55.780 "traddr": "10.0.0.2", 00:15:55.780 "trsvcid": "4420" 00:15:55.780 }, 00:15:55.780 "peer_address": { 00:15:55.780 "trtype": "TCP", 00:15:55.780 "adrfam": "IPv4", 00:15:55.780 "traddr": "10.0.0.1", 00:15:55.780 "trsvcid": "41456" 00:15:55.780 }, 00:15:55.780 "auth": { 00:15:55.780 "state": "completed", 00:15:55.780 "digest": "sha256", 00:15:55.780 "dhgroup": "ffdhe4096" 00:15:55.780 } 00:15:55.780 } 00:15:55.780 ]' 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.780 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.039 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.603 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.861 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.120 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.120 11:07:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:57.120 { 00:15:57.120 "cntlid": 31, 00:15:57.120 "qid": 0, 00:15:57.120 "state": "enabled", 00:15:57.120 "listen_address": { 00:15:57.120 "trtype": "TCP", 00:15:57.120 "adrfam": "IPv4", 00:15:57.120 "traddr": "10.0.0.2", 00:15:57.120 "trsvcid": "4420" 00:15:57.120 }, 00:15:57.120 "peer_address": { 00:15:57.120 "trtype": "TCP", 00:15:57.120 "adrfam": "IPv4", 00:15:57.120 "traddr": "10.0.0.1", 00:15:57.120 "trsvcid": "42706" 00:15:57.120 }, 00:15:57.120 "auth": { 00:15:57.120 "state": "completed", 00:15:57.120 "digest": "sha256", 00:15:57.120 "dhgroup": "ffdhe4096" 00:15:57.120 } 00:15:57.120 } 00:15:57.120 ]' 00:15:57.120 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.378 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.637 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:58.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:58.203 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:58.204 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:58.770
00:15:58.770 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:58.770 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:58.770 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:58.770 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:58.770 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:58.770 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:58.770 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:59.029 {
00:15:59.029 "cntlid": 33,
00:15:59.029 "qid": 0,
00:15:59.029 "state": "enabled",
00:15:59.029 "listen_address": {
00:15:59.029 "trtype": "TCP",
00:15:59.029 "adrfam": "IPv4",
00:15:59.029 "traddr": "10.0.0.2",
00:15:59.029 "trsvcid": "4420"
00:15:59.029 },
00:15:59.029 "peer_address": {
00:15:59.029 "trtype": "TCP",
00:15:59.029 "adrfam": "IPv4",
00:15:59.029 "traddr": "10.0.0.1",
00:15:59.029 "trsvcid": "42744"
00:15:59.029 },
00:15:59.029 "auth": {
00:15:59.029 "state": "completed",
00:15:59.029 "digest": "sha256",
00:15:59.029 "dhgroup": "ffdhe6144"
00:15:59.029 }
00:15:59.029 }
00:15:59.029 ]'
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:59.029 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:59.287 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==:
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:59.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:59.854 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:00.112 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:00.370
00:16:00.370 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:00.370 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:00.370 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:00.628 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:00.628 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:00.628 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:00.628 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.628 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:00.629 {
00:16:00.629 "cntlid": 35,
00:16:00.629 "qid": 0,
00:16:00.629 "state": "enabled",
00:16:00.629 "listen_address": {
00:16:00.629 "trtype": "TCP",
00:16:00.629 "adrfam": "IPv4",
00:16:00.629 "traddr": "10.0.0.2",
00:16:00.629 "trsvcid": "4420"
00:16:00.629 },
00:16:00.629 "peer_address": {
00:16:00.629 "trtype": "TCP",
00:16:00.629 "adrfam": "IPv4",
00:16:00.629 "traddr": "10.0.0.1",
00:16:00.629 "trsvcid": "42764"
00:16:00.629 },
00:16:00.629 "auth": {
00:16:00.629 "state": "completed",
00:16:00.629 "digest": "sha256",
00:16:00.629 "dhgroup": "ffdhe6144"
00:16:00.629 }
00:16:00.629 }
00:16:00.629 ]'
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:00.629 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:00.888 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3:
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:01.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:01.455 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:01.713 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:01.971
00:16:01.971 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:01.971 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:01.971 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:02.229 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:02.229 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:02.229 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:02.229 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.229 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:02.229 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:02.229 {
00:16:02.229 "cntlid": 37,
00:16:02.229 "qid": 0,
00:16:02.229 "state": "enabled",
00:16:02.229 "listen_address": {
00:16:02.230 "trtype": "TCP",
00:16:02.230 "adrfam": "IPv4",
00:16:02.230 "traddr": "10.0.0.2",
00:16:02.230 "trsvcid": "4420"
00:16:02.230 },
00:16:02.230 "peer_address": {
00:16:02.230 "trtype": "TCP",
00:16:02.230 "adrfam": "IPv4",
00:16:02.230 "traddr": "10.0.0.1",
00:16:02.230 "trsvcid": "42792"
00:16:02.230 },
00:16:02.230 "auth": {
00:16:02.230 "state": "completed",
00:16:02.230 "digest": "sha256",
00:16:02.230 "dhgroup": "ffdhe6144"
00:16:02.230 }
00:16:02.230 }
00:16:02.230 ]'
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:02.230 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:02.488 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==:
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:03.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:03.054 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:03.311 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:03.569
00:16:03.569 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:03.569 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:03.569 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:03.826 {
00:16:03.826 "cntlid": 39,
00:16:03.826 "qid": 0,
00:16:03.826 "state": "enabled",
00:16:03.826 "listen_address": {
00:16:03.826 "trtype": "TCP",
00:16:03.826 "adrfam": "IPv4",
00:16:03.826 "traddr": "10.0.0.2",
00:16:03.826 "trsvcid": "4420"
00:16:03.826 },
00:16:03.826 "peer_address": {
00:16:03.826 "trtype": "TCP",
00:16:03.826 "adrfam": "IPv4",
00:16:03.826 "traddr": "10.0.0.1",
00:16:03.826 "trsvcid": "42830"
00:16:03.826 },
00:16:03.826 "auth": {
00:16:03.826 "state": "completed",
00:16:03.826 "digest": "sha256",
00:16:03.826 "dhgroup": "ffdhe6144"
00:16:03.826 }
00:16:03.826 }
00:16:03.826 ]'
00:16:03.826 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.826 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:04.084 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=:
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:04.651 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:04.909 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0
00:16:04.909 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:04.909 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:04.909 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:05.474
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:05.474 {
00:16:05.474 "cntlid": 41,
00:16:05.474 "qid": 0,
00:16:05.474 "state": "enabled",
00:16:05.474 "listen_address": {
00:16:05.474 "trtype": "TCP",
00:16:05.474 "adrfam": "IPv4",
00:16:05.474 "traddr": "10.0.0.2",
00:16:05.474 "trsvcid": "4420"
00:16:05.474 },
00:16:05.474 "peer_address": {
00:16:05.474 "trtype": "TCP",
00:16:05.474 "adrfam": "IPv4",
00:16:05.474 "traddr": "10.0.0.1",
00:16:05.474 "trsvcid": "42864"
00:16:05.474 },
00:16:05.474 "auth": {
00:16:05.474 "state": "completed",
00:16:05.474 "digest": "sha256",
00:16:05.474 "dhgroup": "ffdhe8192"
00:16:05.474 }
00:16:05.474 }
00:16:05.474 ]'
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:05.474 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:05.475 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:05.730 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:05.730 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:05.730 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.730 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.730 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:05.730 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==:
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:06.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:06.292 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:06.548 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:06.548 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1
00:16:06.548 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:06.548 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:06.549 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:07.113
00:16:07.113 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:07.113 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:07.113 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.369 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.369 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.369 11:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:07.369 11:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.369 11:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:07.370 {
00:16:07.370 "cntlid": 43,
00:16:07.370 "qid": 0,
00:16:07.370 "state": "enabled",
00:16:07.370 "listen_address": {
00:16:07.370 "trtype": "TCP",
00:16:07.370 "adrfam": "IPv4",
00:16:07.370 "traddr": "10.0.0.2",
00:16:07.370 "trsvcid": "4420"
00:16:07.370 },
00:16:07.370 "peer_address": {
00:16:07.370 "trtype": "TCP",
00:16:07.370 "adrfam": "IPv4",
00:16:07.370 "traddr": "10.0.0.1",
00:16:07.370 "trsvcid": "47366"
00:16:07.370 },
00:16:07.370 "auth": {
00:16:07.370 "state": "completed",
00:16:07.370 "digest": "sha256",
00:16:07.370 "dhgroup": "ffdhe8192"
00:16:07.370 }
00:16:07.370 }
00:16:07.370 ]'
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:07.370 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:07.627 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3:
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:08.191 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:08.449 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:08.746
00:16:08.746 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:08.746 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:08.746 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:09.004 {
00:16:09.004 "cntlid": 45,
00:16:09.004 "qid": 0,
00:16:09.004 "state": "enabled",
00:16:09.004 "listen_address": {
00:16:09.004 "trtype": "TCP",
00:16:09.004 "adrfam": "IPv4",
00:16:09.004 "traddr": "10.0.0.2",
00:16:09.004 "trsvcid": "4420"
00:16:09.004 },
00:16:09.004 "peer_address": {
00:16:09.004 "trtype": "TCP",
00:16:09.004 "adrfam": "IPv4",
00:16:09.004 "traddr": "10.0.0.1",
00:16:09.004 "trsvcid": "47390"
00:16:09.004 },
00:16:09.004 "auth": {
00:16:09.004 "state": "completed",
00:16:09.004 "digest": "sha256",
00:16:09.004 "dhgroup": "ffdhe8192"
00:16:09.004 }
00:16:09.004 }
00:16:09.004 ]'
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:09.004 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:09.261 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:09.261 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:09.261 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:09.261 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==:
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:09.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:09.826 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3
00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:10.084 11:08:07
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.084 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.647 00:16:10.647 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:10.647 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:10.648 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.648 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.648 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.648 11:08:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.648 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.905 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.905 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:10.905 { 00:16:10.905 "cntlid": 47, 00:16:10.905 "qid": 0, 00:16:10.905 "state": "enabled", 00:16:10.905 "listen_address": { 00:16:10.905 "trtype": "TCP", 00:16:10.905 "adrfam": "IPv4", 00:16:10.905 "traddr": "10.0.0.2", 00:16:10.905 "trsvcid": "4420" 00:16:10.905 }, 00:16:10.905 "peer_address": { 00:16:10.905 "trtype": "TCP", 00:16:10.905 "adrfam": "IPv4", 00:16:10.905 "traddr": "10.0.0.1", 00:16:10.905 "trsvcid": "47416" 00:16:10.905 }, 00:16:10.905 "auth": { 00:16:10.905 "state": "completed", 00:16:10.905 "digest": "sha256", 00:16:10.905 "dhgroup": "ffdhe8192" 00:16:10.905 } 00:16:10.905 } 00:16:10.905 ]' 00:16:10.905 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:10.905 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.905 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:10.905 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.905 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:10.905 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.905 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.905 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.162 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:11.727 11:08:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:11.727 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:11.985 00:16:11.985 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:11.985 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:11.985 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.242 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.242 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:12.243 { 00:16:12.243 "cntlid": 49, 00:16:12.243 "qid": 0, 00:16:12.243 "state": "enabled", 00:16:12.243 "listen_address": { 00:16:12.243 "trtype": "TCP", 00:16:12.243 "adrfam": "IPv4", 00:16:12.243 "traddr": "10.0.0.2", 00:16:12.243 "trsvcid": "4420" 00:16:12.243 }, 00:16:12.243 "peer_address": { 00:16:12.243 "trtype": "TCP", 00:16:12.243 "adrfam": "IPv4", 00:16:12.243 "traddr": "10.0.0.1", 00:16:12.243 "trsvcid": "47442" 00:16:12.243 }, 00:16:12.243 "auth": { 00:16:12.243 "state": "completed", 00:16:12.243 "digest": "sha384", 00:16:12.243 "dhgroup": "null" 00:16:12.243 } 00:16:12.243 } 00:16:12.243 ]' 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:12.243 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:12.500 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.500 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.500 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.500 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.064 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:13.321 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:13.578 00:16:13.578 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:13.578 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:13.578 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.834 11:08:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:13.834 { 00:16:13.834 "cntlid": 51, 00:16:13.834 "qid": 0, 00:16:13.834 "state": "enabled", 00:16:13.834 "listen_address": { 00:16:13.834 "trtype": "TCP", 00:16:13.834 "adrfam": "IPv4", 00:16:13.834 "traddr": "10.0.0.2", 00:16:13.834 "trsvcid": "4420" 00:16:13.834 }, 00:16:13.834 "peer_address": { 00:16:13.834 "trtype": "TCP", 00:16:13.834 "adrfam": "IPv4", 00:16:13.834 "traddr": "10.0.0.1", 00:16:13.834 "trsvcid": "47474" 00:16:13.834 }, 00:16:13.834 "auth": { 00:16:13.834 "state": "completed", 00:16:13.834 "digest": "sha384", 00:16:13.834 "dhgroup": "null" 00:16:13.834 } 00:16:13.834 } 00:16:13.834 ]' 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:13.834 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:13.834 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.834 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.834 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.090 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.654 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:14.911 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:14.911 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:15.168 { 00:16:15.168 "cntlid": 53, 00:16:15.168 "qid": 0, 00:16:15.168 "state": "enabled", 00:16:15.168 "listen_address": { 00:16:15.168 "trtype": "TCP", 00:16:15.168 "adrfam": "IPv4", 00:16:15.168 "traddr": "10.0.0.2", 00:16:15.168 "trsvcid": 
"4420" 00:16:15.168 }, 00:16:15.168 "peer_address": { 00:16:15.168 "trtype": "TCP", 00:16:15.168 "adrfam": "IPv4", 00:16:15.168 "traddr": "10.0.0.1", 00:16:15.168 "trsvcid": "47502" 00:16:15.168 }, 00:16:15.168 "auth": { 00:16:15.168 "state": "completed", 00:16:15.168 "digest": "sha384", 00:16:15.168 "dhgroup": "null" 00:16:15.168 } 00:16:15.168 } 00:16:15.168 ]' 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.168 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:15.425 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:15.425 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:15.425 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.425 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.425 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.683 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.250 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.508 00:16:16.508 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:16.508 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:16.508 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:16.766 { 00:16:16.766 "cntlid": 55, 00:16:16.766 "qid": 0, 00:16:16.766 "state": "enabled", 00:16:16.766 "listen_address": { 00:16:16.766 "trtype": "TCP", 00:16:16.766 "adrfam": "IPv4", 00:16:16.766 "traddr": "10.0.0.2", 00:16:16.766 "trsvcid": "4420" 00:16:16.766 }, 00:16:16.766 "peer_address": { 00:16:16.766 "trtype": "TCP", 00:16:16.766 "adrfam": "IPv4", 00:16:16.766 "traddr": "10.0.0.1", 00:16:16.766 "trsvcid": "46010" 00:16:16.766 }, 00:16:16.766 "auth": { 00:16:16.766 "state": "completed", 00:16:16.766 
"digest": "sha384", 00:16:16.766 "dhgroup": "null" 00:16:16.766 } 00:16:16.766 } 00:16:16.766 ]' 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:16.766 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:16.767 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:16.767 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.767 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.767 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.024 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.590 
11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:17.590 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.848 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:17.848 11:08:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:18.105 00:16:18.105 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.106 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:18.106 { 00:16:18.106 "cntlid": 57, 00:16:18.106 "qid": 0, 00:16:18.106 "state": "enabled", 00:16:18.106 "listen_address": { 00:16:18.106 "trtype": "TCP", 00:16:18.106 "adrfam": "IPv4", 00:16:18.106 "traddr": "10.0.0.2", 00:16:18.106 "trsvcid": "4420" 00:16:18.106 }, 00:16:18.106 "peer_address": { 00:16:18.106 "trtype": "TCP", 00:16:18.106 "adrfam": "IPv4", 00:16:18.106 "traddr": "10.0.0.1", 00:16:18.106 "trsvcid": "46028" 00:16:18.106 }, 00:16:18.106 "auth": { 00:16:18.106 "state": "completed", 00:16:18.106 "digest": "sha384", 00:16:18.106 "dhgroup": "ffdhe2048" 00:16:18.106 } 00:16:18.106 } 00:16:18.106 ]' 00:16:18.106 11:08:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.363 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.621 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.187 11:08:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:19.187 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:19.445 00:16:19.445 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:19.445 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:19.445 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:19.704 { 00:16:19.704 "cntlid": 59, 00:16:19.704 "qid": 0, 00:16:19.704 "state": "enabled", 00:16:19.704 "listen_address": { 00:16:19.704 "trtype": "TCP", 00:16:19.704 "adrfam": "IPv4", 00:16:19.704 "traddr": "10.0.0.2", 00:16:19.704 "trsvcid": "4420" 00:16:19.704 }, 00:16:19.704 "peer_address": { 00:16:19.704 "trtype": "TCP", 00:16:19.704 "adrfam": "IPv4", 00:16:19.704 "traddr": "10.0.0.1", 00:16:19.704 "trsvcid": "46044" 00:16:19.704 }, 00:16:19.704 "auth": { 00:16:19.704 "state": "completed", 00:16:19.704 "digest": "sha384", 00:16:19.704 "dhgroup": "ffdhe2048" 00:16:19.704 } 00:16:19.704 } 00:16:19.704 ]' 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.704 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.962 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:20.528 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:20.528 11:08:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:20.786 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:21.045 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:21.045 11:08:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:21.045 { 00:16:21.045 "cntlid": 61, 00:16:21.045 "qid": 0, 00:16:21.045 "state": "enabled", 00:16:21.045 "listen_address": { 00:16:21.045 "trtype": "TCP", 00:16:21.045 "adrfam": "IPv4", 00:16:21.045 "traddr": "10.0.0.2", 00:16:21.045 "trsvcid": "4420" 00:16:21.045 }, 00:16:21.045 "peer_address": { 00:16:21.045 "trtype": "TCP", 00:16:21.045 "adrfam": "IPv4", 00:16:21.045 "traddr": "10.0.0.1", 00:16:21.045 "trsvcid": "46082" 00:16:21.045 }, 00:16:21.045 "auth": { 00:16:21.045 "state": "completed", 00:16:21.045 "digest": "sha384", 00:16:21.045 "dhgroup": "ffdhe2048" 00:16:21.045 } 00:16:21.045 } 00:16:21.045 ]' 00:16:21.045 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.302 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.560 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.126 
11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.126 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.384 00:16:22.385 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:22.385 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:22.385 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:22.643 { 00:16:22.643 "cntlid": 63, 00:16:22.643 "qid": 0, 00:16:22.643 "state": "enabled", 00:16:22.643 "listen_address": { 00:16:22.643 "trtype": "TCP", 00:16:22.643 "adrfam": "IPv4", 00:16:22.643 "traddr": "10.0.0.2", 00:16:22.643 "trsvcid": "4420" 00:16:22.643 }, 00:16:22.643 "peer_address": { 00:16:22.643 "trtype": "TCP", 00:16:22.643 "adrfam": "IPv4", 00:16:22.643 "traddr": "10.0.0.1", 00:16:22.643 "trsvcid": "46098" 00:16:22.643 }, 00:16:22.643 "auth": { 00:16:22.643 "state": "completed", 00:16:22.643 "digest": "sha384", 00:16:22.643 "dhgroup": "ffdhe2048" 00:16:22.643 } 00:16:22.643 } 00:16:22.643 ]' 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.643 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:22.901 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.901 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:22.901 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.901 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:23.466 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.466 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.466 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.467 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.467 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.467 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.467 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:23.467 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.467 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:16:23.724 11:08:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.724 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.982 00:16:23.982 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.982 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.982 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.239 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.239 
11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.239 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:24.240 { 00:16:24.240 "cntlid": 65, 00:16:24.240 "qid": 0, 00:16:24.240 "state": "enabled", 00:16:24.240 "listen_address": { 00:16:24.240 "trtype": "TCP", 00:16:24.240 "adrfam": "IPv4", 00:16:24.240 "traddr": "10.0.0.2", 00:16:24.240 "trsvcid": "4420" 00:16:24.240 }, 00:16:24.240 "peer_address": { 00:16:24.240 "trtype": "TCP", 00:16:24.240 "adrfam": "IPv4", 00:16:24.240 "traddr": "10.0.0.1", 00:16:24.240 "trsvcid": "46114" 00:16:24.240 }, 00:16:24.240 "auth": { 00:16:24.240 "state": "completed", 00:16:24.240 "digest": "sha384", 00:16:24.240 "dhgroup": "ffdhe3072" 00:16:24.240 } 00:16:24.240 } 00:16:24.240 ]' 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.240 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.497 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.081 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.366 00:16:25.366 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:25.366 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:25.366 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.624 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.624 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.624 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:25.624 11:08:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.624 11:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:25.625 { 00:16:25.625 "cntlid": 67, 00:16:25.625 "qid": 0, 00:16:25.625 "state": "enabled", 00:16:25.625 "listen_address": { 00:16:25.625 "trtype": "TCP", 00:16:25.625 "adrfam": "IPv4", 00:16:25.625 "traddr": "10.0.0.2", 00:16:25.625 "trsvcid": "4420" 00:16:25.625 }, 00:16:25.625 "peer_address": { 00:16:25.625 "trtype": "TCP", 00:16:25.625 "adrfam": "IPv4", 00:16:25.625 "traddr": "10.0.0.1", 00:16:25.625 "trsvcid": "46148" 00:16:25.625 }, 00:16:25.625 "auth": { 00:16:25.625 "state": "completed", 00:16:25.625 "digest": "sha384", 00:16:25.625 "dhgroup": "ffdhe3072" 00:16:25.625 } 00:16:25.625 } 00:16:25.625 ]' 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.625 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.883 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.448 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.706 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.963 00:16:26.964 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:26.964 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:26.964 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # qpairs='[ 00:16:27.221 { 00:16:27.221 "cntlid": 69, 00:16:27.221 "qid": 0, 00:16:27.221 "state": "enabled", 00:16:27.221 "listen_address": { 00:16:27.221 "trtype": "TCP", 00:16:27.221 "adrfam": "IPv4", 00:16:27.221 "traddr": "10.0.0.2", 00:16:27.221 "trsvcid": "4420" 00:16:27.221 }, 00:16:27.221 "peer_address": { 00:16:27.221 "trtype": "TCP", 00:16:27.221 "adrfam": "IPv4", 00:16:27.221 "traddr": "10.0.0.1", 00:16:27.221 "trsvcid": "56060" 00:16:27.221 }, 00:16:27.221 "auth": { 00:16:27.221 "state": "completed", 00:16:27.221 "digest": "sha384", 00:16:27.221 "dhgroup": "ffdhe3072" 00:16:27.221 } 00:16:27.221 } 00:16:27.221 ]' 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.221 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.479 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.044 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.302 00:16:28.302 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:28.302 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:28.302 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.559 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.559 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.559 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.559 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:28.560 { 00:16:28.560 "cntlid": 71, 00:16:28.560 "qid": 0, 00:16:28.560 "state": "enabled", 00:16:28.560 "listen_address": { 00:16:28.560 "trtype": "TCP", 00:16:28.560 "adrfam": "IPv4", 00:16:28.560 "traddr": 
"10.0.0.2", 00:16:28.560 "trsvcid": "4420" 00:16:28.560 }, 00:16:28.560 "peer_address": { 00:16:28.560 "trtype": "TCP", 00:16:28.560 "adrfam": "IPv4", 00:16:28.560 "traddr": "10.0.0.1", 00:16:28.560 "trsvcid": "56096" 00:16:28.560 }, 00:16:28.560 "auth": { 00:16:28.560 "state": "completed", 00:16:28.560 "digest": "sha384", 00:16:28.560 "dhgroup": "ffdhe3072" 00:16:28.560 } 00:16:28.560 } 00:16:28.560 ]' 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.560 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:28.816 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.816 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.817 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.817 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.381 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:29.638 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:29.896 00:16:29.896 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:29.896 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:29.896 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:30.153 { 00:16:30.153 "cntlid": 73, 00:16:30.153 "qid": 0, 00:16:30.153 "state": "enabled", 00:16:30.153 "listen_address": { 00:16:30.153 "trtype": "TCP", 00:16:30.153 "adrfam": "IPv4", 00:16:30.153 "traddr": "10.0.0.2", 00:16:30.153 "trsvcid": "4420" 00:16:30.153 }, 00:16:30.153 "peer_address": { 00:16:30.153 "trtype": "TCP", 00:16:30.153 
"adrfam": "IPv4", 00:16:30.153 "traddr": "10.0.0.1", 00:16:30.153 "trsvcid": "56126" 00:16:30.153 }, 00:16:30.153 "auth": { 00:16:30.153 "state": "completed", 00:16:30.153 "digest": "sha384", 00:16:30.153 "dhgroup": "ffdhe4096" 00:16:30.153 } 00:16:30.153 } 00:16:30.153 ]' 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.153 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.410 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.974 11:08:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:30.974 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:31.230 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:31.487 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:31.487 { 00:16:31.487 "cntlid": 75, 00:16:31.487 "qid": 0, 00:16:31.487 "state": "enabled", 00:16:31.487 "listen_address": { 00:16:31.487 "trtype": "TCP", 00:16:31.487 "adrfam": "IPv4", 00:16:31.487 "traddr": "10.0.0.2", 00:16:31.487 "trsvcid": "4420" 00:16:31.487 }, 00:16:31.487 "peer_address": { 00:16:31.487 "trtype": "TCP", 00:16:31.487 "adrfam": "IPv4", 00:16:31.487 "traddr": "10.0.0.1", 00:16:31.487 "trsvcid": "56166" 00:16:31.487 }, 00:16:31.487 "auth": { 00:16:31.487 "state": "completed", 00:16:31.487 "digest": "sha384", 00:16:31.487 "dhgroup": "ffdhe4096" 00:16:31.487 } 
00:16:31.487 } 00:16:31.487 ]' 00:16:31.487 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.744 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.000 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:32.564 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:32.821 00:16:32.821 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:32.821 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:32.821 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:33.079 { 00:16:33.079 "cntlid": 77, 00:16:33.079 "qid": 0, 00:16:33.079 "state": "enabled", 00:16:33.079 "listen_address": { 00:16:33.079 "trtype": "TCP", 00:16:33.079 "adrfam": "IPv4", 00:16:33.079 "traddr": "10.0.0.2", 00:16:33.079 "trsvcid": "4420" 00:16:33.079 }, 00:16:33.079 "peer_address": { 00:16:33.079 "trtype": "TCP", 00:16:33.079 "adrfam": "IPv4", 00:16:33.079 "traddr": "10.0.0.1", 00:16:33.079 "trsvcid": "56194" 00:16:33.079 }, 00:16:33.079 "auth": { 00:16:33.079 "state": "completed", 00:16:33.079 "digest": "sha384", 00:16:33.079 "dhgroup": "ffdhe4096" 00:16:33.079 } 00:16:33.079 } 00:16:33.079 ]' 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.079 11:08:30 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:33.336 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.336 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:33.336 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.336 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.336 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.336 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
00:16:33.899 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.157 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.412 00:16:34.412 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:34.412 11:08:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:34.412 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:34.669 { 00:16:34.669 "cntlid": 79, 00:16:34.669 "qid": 0, 00:16:34.669 "state": "enabled", 00:16:34.669 "listen_address": { 00:16:34.669 "trtype": "TCP", 00:16:34.669 "adrfam": "IPv4", 00:16:34.669 "traddr": "10.0.0.2", 00:16:34.669 "trsvcid": "4420" 00:16:34.669 }, 00:16:34.669 "peer_address": { 00:16:34.669 "trtype": "TCP", 00:16:34.669 "adrfam": "IPv4", 00:16:34.669 "traddr": "10.0.0.1", 00:16:34.669 "trsvcid": "56210" 00:16:34.669 }, 00:16:34.669 "auth": { 00:16:34.669 "state": "completed", 00:16:34.669 "digest": "sha384", 00:16:34.669 "dhgroup": "ffdhe4096" 00:16:34.669 } 00:16:34.669 } 00:16:34.669 ]' 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.669 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.925 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:35.488 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:35.745 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:36.002 00:16:36.002 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:36.002 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:16:36.002 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:36.258 { 00:16:36.258 "cntlid": 81, 00:16:36.258 "qid": 0, 00:16:36.258 "state": "enabled", 00:16:36.258 "listen_address": { 00:16:36.258 "trtype": "TCP", 00:16:36.258 "adrfam": "IPv4", 00:16:36.258 "traddr": "10.0.0.2", 00:16:36.258 "trsvcid": "4420" 00:16:36.258 }, 00:16:36.258 "peer_address": { 00:16:36.258 "trtype": "TCP", 00:16:36.258 "adrfam": "IPv4", 00:16:36.258 "traddr": "10.0.0.1", 00:16:36.258 "trsvcid": "56236" 00:16:36.258 }, 00:16:36.258 "auth": { 00:16:36.258 "state": "completed", 00:16:36.258 "digest": "sha384", 00:16:36.258 "dhgroup": "ffdhe6144" 00:16:36.258 } 00:16:36.258 } 00:16:36.258 ]' 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.258 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.515 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.078 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # 
connect_authenticate sha384 ffdhe6144 1 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.335 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:37.336 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:37.592 00:16:37.592 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:37.592 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:37.592 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:37.848 { 00:16:37.848 "cntlid": 83, 00:16:37.848 "qid": 0, 00:16:37.848 "state": "enabled", 00:16:37.848 "listen_address": { 00:16:37.848 "trtype": "TCP", 00:16:37.848 "adrfam": "IPv4", 00:16:37.848 "traddr": "10.0.0.2", 00:16:37.848 "trsvcid": "4420" 00:16:37.848 }, 00:16:37.848 "peer_address": { 00:16:37.848 "trtype": "TCP", 00:16:37.848 "adrfam": "IPv4", 00:16:37.848 "traddr": "10.0.0.1", 00:16:37.848 "trsvcid": "34450" 00:16:37.848 }, 00:16:37.848 "auth": { 00:16:37.848 "state": "completed", 00:16:37.848 "digest": "sha384", 00:16:37.848 "dhgroup": "ffdhe6144" 00:16:37.848 } 00:16:37.848 } 00:16:37.848 ]' 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.848 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:37.848 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.848 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.848 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.103 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:38.664 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.664 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.665 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.665 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.665 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:38.665 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.665 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:38.921 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:39.177 00:16:39.177 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:39.177 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.177 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:39.433 { 00:16:39.433 "cntlid": 85, 00:16:39.433 "qid": 0, 00:16:39.433 "state": "enabled", 00:16:39.433 "listen_address": { 00:16:39.433 "trtype": "TCP", 00:16:39.433 "adrfam": "IPv4", 00:16:39.433 "traddr": "10.0.0.2", 00:16:39.433 "trsvcid": "4420" 00:16:39.433 }, 00:16:39.433 "peer_address": { 00:16:39.433 "trtype": "TCP", 00:16:39.433 "adrfam": "IPv4", 00:16:39.433 "traddr": "10.0.0.1", 00:16:39.433 "trsvcid": "34486" 00:16:39.433 }, 00:16:39.433 "auth": { 00:16:39.433 "state": "completed", 00:16:39.433 "digest": "sha384", 00:16:39.433 "dhgroup": "ffdhe6144" 00:16:39.433 } 00:16:39.433 } 00:16:39.433 ]' 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.433 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.689 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.252 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.509 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.765 00:16:40.765 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:40.765 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:40.765 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:41.022 11:08:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:41.022 { 00:16:41.022 "cntlid": 87, 00:16:41.022 "qid": 0, 00:16:41.022 "state": "enabled", 00:16:41.022 "listen_address": { 00:16:41.022 "trtype": "TCP", 00:16:41.022 "adrfam": "IPv4", 00:16:41.022 "traddr": "10.0.0.2", 00:16:41.022 "trsvcid": "4420" 00:16:41.022 }, 00:16:41.022 "peer_address": { 00:16:41.022 "trtype": "TCP", 00:16:41.022 "adrfam": "IPv4", 00:16:41.022 "traddr": "10.0.0.1", 00:16:41.022 "trsvcid": "34512" 00:16:41.022 }, 00:16:41.022 "auth": { 00:16:41.022 "state": "completed", 00:16:41.022 "digest": "sha384", 00:16:41.022 "dhgroup": "ffdhe6144" 00:16:41.022 } 00:16:41.022 } 00:16:41.022 ]' 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.022 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.279 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 
00:16:41.871 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.871 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:42.128 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:42.691 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:42.691 { 00:16:42.691 "cntlid": 89, 00:16:42.691 "qid": 0, 
00:16:42.691 "state": "enabled", 00:16:42.691 "listen_address": { 00:16:42.691 "trtype": "TCP", 00:16:42.691 "adrfam": "IPv4", 00:16:42.691 "traddr": "10.0.0.2", 00:16:42.691 "trsvcid": "4420" 00:16:42.691 }, 00:16:42.691 "peer_address": { 00:16:42.691 "trtype": "TCP", 00:16:42.691 "adrfam": "IPv4", 00:16:42.691 "traddr": "10.0.0.1", 00:16:42.691 "trsvcid": "34526" 00:16:42.691 }, 00:16:42.691 "auth": { 00:16:42.691 "state": "completed", 00:16:42.691 "digest": "sha384", 00:16:42.691 "dhgroup": "ffdhe8192" 00:16:42.691 } 00:16:42.691 } 00:16:42.691 ]' 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.691 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:42.948 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.948 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.948 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.948 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.513 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.513 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:43.770 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:44.335 00:16:44.335 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:44.335 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:44.335 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:44.593 { 00:16:44.593 "cntlid": 91, 00:16:44.593 "qid": 0, 00:16:44.593 "state": "enabled", 00:16:44.593 "listen_address": { 00:16:44.593 "trtype": "TCP", 00:16:44.593 "adrfam": "IPv4", 00:16:44.593 "traddr": "10.0.0.2", 00:16:44.593 "trsvcid": "4420" 00:16:44.593 }, 00:16:44.593 "peer_address": { 
00:16:44.593 "trtype": "TCP", 00:16:44.593 "adrfam": "IPv4", 00:16:44.593 "traddr": "10.0.0.1", 00:16:44.593 "trsvcid": "34558" 00:16:44.593 }, 00:16:44.593 "auth": { 00:16:44.593 "state": "completed", 00:16:44.593 "digest": "sha384", 00:16:44.593 "dhgroup": "ffdhe8192" 00:16:44.593 } 00:16:44.593 } 00:16:44.593 ]' 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.593 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.850 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.414 
11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.414 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.671 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.671 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.671 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.928 00:16:45.928 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:45.928 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.928 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:46.185 { 00:16:46.185 "cntlid": 93, 00:16:46.185 "qid": 0, 00:16:46.185 "state": "enabled", 00:16:46.185 "listen_address": { 00:16:46.185 "trtype": "TCP", 00:16:46.185 "adrfam": "IPv4", 00:16:46.185 "traddr": "10.0.0.2", 00:16:46.185 "trsvcid": "4420" 00:16:46.185 }, 00:16:46.185 "peer_address": { 00:16:46.185 "trtype": "TCP", 00:16:46.185 "adrfam": "IPv4", 00:16:46.185 "traddr": "10.0.0.1", 00:16:46.185 "trsvcid": "34586" 00:16:46.185 }, 00:16:46.185 "auth": { 00:16:46.185 "state": "completed", 00:16:46.185 "digest": "sha384", 00:16:46.185 "dhgroup": "ffdhe8192" 00:16:46.185 } 
00:16:46.185 } 00:16:46.185 ]' 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.185 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.442 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.006 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.263 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.828 00:16:47.828 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:47.828 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:47.828 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:47.828 { 00:16:47.828 "cntlid": 95, 00:16:47.828 "qid": 0, 00:16:47.828 "state": "enabled", 00:16:47.828 "listen_address": { 00:16:47.828 "trtype": "TCP", 00:16:47.828 "adrfam": "IPv4", 00:16:47.828 "traddr": "10.0.0.2", 00:16:47.828 "trsvcid": "4420" 00:16:47.828 }, 00:16:47.828 "peer_address": { 00:16:47.828 "trtype": "TCP", 00:16:47.828 "adrfam": "IPv4", 00:16:47.828 "traddr": "10.0.0.1", 00:16:47.828 "trsvcid": "44440" 00:16:47.828 }, 00:16:47.828 "auth": { 00:16:47.828 "state": "completed", 00:16:47.828 "digest": "sha384", 00:16:47.828 "dhgroup": "ffdhe8192" 00:16:47.828 } 00:16:47.828 } 00:16:47.828 ]' 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:47.828 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.828 11:08:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:48.086 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.086 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:48.086 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.086 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.086 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.086 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:48.652 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:48.908 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:48.909 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:49.166 00:16:49.166 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:49.166 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:49.166 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:49.423 { 00:16:49.423 "cntlid": 97, 00:16:49.423 "qid": 0, 00:16:49.423 "state": "enabled", 00:16:49.423 "listen_address": { 00:16:49.423 "trtype": "TCP", 00:16:49.423 "adrfam": "IPv4", 00:16:49.423 "traddr": "10.0.0.2", 00:16:49.423 "trsvcid": "4420" 00:16:49.423 }, 00:16:49.423 "peer_address": { 00:16:49.423 "trtype": "TCP", 00:16:49.423 "adrfam": "IPv4", 00:16:49.423 "traddr": "10.0.0.1", 00:16:49.423 "trsvcid": "44472" 00:16:49.423 }, 00:16:49.423 "auth": { 00:16:49.423 "state": "completed", 00:16:49.423 "digest": "sha512", 00:16:49.423 "dhgroup": "null" 00:16:49.423 } 00:16:49.423 } 00:16:49.423 ]' 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.423 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.680 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:50.244 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.244 11:08:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:50.501 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:50.758 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # jq -r '.[].name' 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.758 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:50.758 { 00:16:50.758 "cntlid": 99, 00:16:50.758 "qid": 0, 00:16:50.758 "state": "enabled", 00:16:50.758 "listen_address": { 00:16:50.758 "trtype": "TCP", 00:16:50.758 "adrfam": "IPv4", 00:16:50.758 "traddr": "10.0.0.2", 00:16:50.758 "trsvcid": "4420" 00:16:50.758 }, 00:16:50.758 "peer_address": { 00:16:50.758 "trtype": "TCP", 00:16:50.758 "adrfam": "IPv4", 00:16:50.758 "traddr": "10.0.0.1", 00:16:50.758 "trsvcid": "44516" 00:16:50.758 }, 00:16:50.758 "auth": { 00:16:50.758 "state": "completed", 00:16:50.758 "digest": "sha512", 00:16:50.758 "dhgroup": "null" 00:16:50.758 } 00:16:50.758 } 00:16:50.758 ]' 00:16:50.758 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.015 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.272 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:51.836 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 
00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:51.836 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:52.093 00:16:52.093 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:52.093 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:52.093 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:52.350 { 00:16:52.350 "cntlid": 101, 00:16:52.350 "qid": 0, 00:16:52.350 "state": "enabled", 00:16:52.350 "listen_address": { 00:16:52.350 "trtype": "TCP", 00:16:52.350 "adrfam": "IPv4", 00:16:52.350 "traddr": "10.0.0.2", 00:16:52.350 "trsvcid": "4420" 00:16:52.350 }, 00:16:52.350 "peer_address": { 00:16:52.350 "trtype": "TCP", 00:16:52.350 "adrfam": "IPv4", 00:16:52.350 "traddr": "10.0.0.1", 00:16:52.350 "trsvcid": "44552" 00:16:52.350 }, 00:16:52.350 "auth": { 00:16:52.350 "state": "completed", 00:16:52.350 "digest": "sha512", 00:16:52.350 "dhgroup": "null" 00:16:52.350 } 00:16:52.350 } 00:16:52.350 ]' 00:16:52.350 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.351 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.351 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.351 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:52.351 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.608 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.608 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.608 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.608 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.176 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:53.433 11:08:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.433 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.434 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.690 00:16:53.690 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:53.690 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:53.691 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.947 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.947 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.947 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.947 11:08:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.947 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:53.947 { 00:16:53.947 "cntlid": 103, 00:16:53.947 "qid": 0, 00:16:53.947 "state": "enabled", 00:16:53.947 "listen_address": { 00:16:53.947 "trtype": "TCP", 00:16:53.947 "adrfam": "IPv4", 00:16:53.947 "traddr": "10.0.0.2", 00:16:53.947 "trsvcid": "4420" 00:16:53.947 }, 00:16:53.947 "peer_address": { 00:16:53.947 "trtype": "TCP", 00:16:53.947 "adrfam": "IPv4", 00:16:53.947 "traddr": "10.0.0.1", 00:16:53.947 "trsvcid": "44600" 00:16:53.947 }, 00:16:53.947 "auth": { 00:16:53.947 "state": "completed", 00:16:53.947 "digest": "sha512", 00:16:53.947 "dhgroup": "null" 00:16:53.947 } 00:16:53.947 } 00:16:53.947 ]' 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.947 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.204 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:16:54.768 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.769 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:55.025 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:16:55.025 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:55.025 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.025 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:55.025 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:55.026 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.026 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:55.283 { 00:16:55.283 "cntlid": 105, 00:16:55.283 "qid": 0, 00:16:55.283 "state": "enabled", 00:16:55.283 "listen_address": { 00:16:55.283 "trtype": "TCP", 00:16:55.283 "adrfam": "IPv4", 00:16:55.283 "traddr": "10.0.0.2", 00:16:55.283 "trsvcid": "4420" 00:16:55.283 }, 00:16:55.283 "peer_address": { 00:16:55.283 "trtype": "TCP", 00:16:55.283 "adrfam": "IPv4", 00:16:55.283 "traddr": "10.0.0.1", 00:16:55.283 "trsvcid": "44622" 00:16:55.283 }, 00:16:55.283 "auth": { 00:16:55.283 "state": "completed", 00:16:55.283 "digest": "sha512", 00:16:55.283 "dhgroup": "ffdhe2048" 00:16:55.283 } 00:16:55.283 } 00:16:55.283 ]' 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.283 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:55.540 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.540 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:55.540 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.540 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.540 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.540 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.104 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.361 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:16:56.361 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:56.361 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.361 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.361 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.361 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:56.362 11:08:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.362 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.362 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.362 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:56.362 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:56.619 00:16:56.619 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:56.619 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:56.619 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:56.876 { 00:16:56.876 "cntlid": 107, 00:16:56.876 "qid": 0, 00:16:56.876 "state": 
"enabled", 00:16:56.876 "listen_address": { 00:16:56.876 "trtype": "TCP", 00:16:56.876 "adrfam": "IPv4", 00:16:56.876 "traddr": "10.0.0.2", 00:16:56.876 "trsvcid": "4420" 00:16:56.876 }, 00:16:56.876 "peer_address": { 00:16:56.876 "trtype": "TCP", 00:16:56.876 "adrfam": "IPv4", 00:16:56.876 "traddr": "10.0.0.1", 00:16:56.876 "trsvcid": "44022" 00:16:56.876 }, 00:16:56.876 "auth": { 00:16:56.876 "state": "completed", 00:16:56.876 "digest": "sha512", 00:16:56.876 "dhgroup": "ffdhe2048" 00:16:56.876 } 00:16:56.876 } 00:16:56.876 ]' 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:56.876 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.876 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:56.876 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.876 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:56.876 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.876 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.876 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.133 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:57.697 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.956 
11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:57.956 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:57.956 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:58.221 { 00:16:58.221 "cntlid": 109, 00:16:58.221 "qid": 0, 00:16:58.221 "state": "enabled", 00:16:58.221 "listen_address": { 00:16:58.221 "trtype": "TCP", 00:16:58.221 "adrfam": "IPv4", 00:16:58.221 "traddr": "10.0.0.2", 00:16:58.221 "trsvcid": "4420" 00:16:58.221 }, 00:16:58.221 "peer_address": { 00:16:58.221 "trtype": "TCP", 00:16:58.221 "adrfam": "IPv4", 
00:16:58.221 "traddr": "10.0.0.1", 00:16:58.221 "trsvcid": "44044" 00:16:58.221 }, 00:16:58.221 "auth": { 00:16:58.221 "state": "completed", 00:16:58.221 "digest": "sha512", 00:16:58.221 "dhgroup": "ffdhe2048" 00:16:58.221 } 00:16:58.221 } 00:16:58.221 ]' 00:16:58.221 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:58.477 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.478 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:58.478 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.478 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:58.478 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.478 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.478 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.734 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key3 00:16:59.298 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.555 00:16:59.555 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:59.555 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:59.555 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:59.812 { 00:16:59.812 "cntlid": 111, 00:16:59.812 "qid": 0, 00:16:59.812 "state": "enabled", 00:16:59.812 "listen_address": { 00:16:59.812 "trtype": "TCP", 00:16:59.812 "adrfam": "IPv4", 00:16:59.812 "traddr": "10.0.0.2", 00:16:59.812 "trsvcid": "4420" 00:16:59.812 }, 00:16:59.812 "peer_address": { 00:16:59.812 "trtype": "TCP", 00:16:59.812 "adrfam": "IPv4", 00:16:59.812 "traddr": "10.0.0.1", 00:16:59.812 "trsvcid": "44076" 00:16:59.812 }, 00:16:59.812 "auth": { 00:16:59.812 "state": "completed", 00:16:59.812 "digest": "sha512", 00:16:59.812 "dhgroup": "ffdhe2048" 00:16:59.812 } 00:16:59.812 } 00:16:59.812 ]' 
00:16:59.812 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:59.812 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.812 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:59.812 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.812 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:00.069 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.069 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.069 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.069 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.633 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:00.890 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:01.148 00:17:01.148 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:01.148 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:01.148 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:01.405 { 00:17:01.405 "cntlid": 113, 00:17:01.405 "qid": 0, 00:17:01.405 "state": "enabled", 00:17:01.405 "listen_address": { 00:17:01.405 "trtype": "TCP", 00:17:01.405 "adrfam": "IPv4", 00:17:01.405 "traddr": "10.0.0.2", 00:17:01.405 "trsvcid": "4420" 00:17:01.405 }, 00:17:01.405 "peer_address": { 00:17:01.405 "trtype": "TCP", 00:17:01.405 "adrfam": "IPv4", 00:17:01.405 "traddr": "10.0.0.1", 00:17:01.405 "trsvcid": "44108" 00:17:01.405 }, 00:17:01.405 "auth": { 00:17:01.405 "state": "completed", 00:17:01.405 "digest": "sha512", 00:17:01.405 "dhgroup": "ffdhe3072" 00:17:01.405 } 00:17:01.405 } 00:17:01.405 ]' 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.405 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.662 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in 
"${!keys[@]}" 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.226 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.483 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:17:02.483 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:02.483 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.483 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.483 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.484 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:02.484 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.484 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.484 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.484 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:02.484 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:02.741 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:02.741 { 00:17:02.741 "cntlid": 115, 00:17:02.741 "qid": 0, 00:17:02.741 "state": "enabled", 00:17:02.741 "listen_address": { 00:17:02.741 "trtype": "TCP", 00:17:02.741 "adrfam": "IPv4", 00:17:02.741 "traddr": "10.0.0.2", 00:17:02.741 "trsvcid": "4420" 00:17:02.741 }, 00:17:02.741 "peer_address": { 00:17:02.741 "trtype": "TCP", 00:17:02.741 "adrfam": "IPv4", 00:17:02.741 "traddr": "10.0.0.1", 00:17:02.741 "trsvcid": "44120" 00:17:02.741 }, 00:17:02.741 "auth": { 00:17:02.741 "state": "completed", 00:17:02.741 "digest": "sha512", 00:17:02.741 "dhgroup": "ffdhe3072" 00:17:02.741 } 00:17:02.741 } 00:17:02.741 ]' 00:17:02.741 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:02.998 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.998 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:02.998 
11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.998 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:02.998 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.998 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.998 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.255 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.819 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:03.819 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.075 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.075 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:04.075 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:04.075 00:17:04.075 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:04.075 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
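Each `connect_authenticate` iteration in the trace above validates the negotiated auth parameters by filtering the `nvmf_subsystem_get_qpairs` output with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) and comparing against the expected values. The check can be sketched in a self-contained form; the JSON sample below is trimmed and inlined for illustration (the real test captures it live from `rpc.py`), and `sed` stands in for `jq` so the sketch has no external dependency:

```shell
#!/usr/bin/env bash
# Sketch of the field checks done at target/auth.sh@45-47 in the trace above.
# Sample qpairs JSON (trimmed, hypothetical values matching the log's shape).
qpairs='[{"cntlid":115,"qid":0,"state":"enabled","auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe3072"}}]'

# Extract a field the way `jq -r '.[0].auth.<field>'` would; the greedy
# leading .* makes "state" resolve to the auth object's state, not the qpair's.
auth_field() {
  printf '%s' "$qpairs" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

digest=$(auth_field digest)
dhgroup=$(auth_field dhgroup)
state=$(auth_field state)

# Same comparisons the test performs with [[ ... == ... ]] pattern matches.
[[ $digest == sha512 ]] && [[ $dhgroup == ffdhe3072 ]] && [[ $state == completed ]] \
  && echo "auth parameters verified"
```

The trace's escaped comparisons (e.g. `[[ sha512 == \s\h\a\5\1\2 ]]`) are the same checks after xtrace quoting; only the extraction mechanism differs here.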
00:17:04.075 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:04.332 { 00:17:04.332 "cntlid": 117, 00:17:04.332 "qid": 0, 00:17:04.332 "state": "enabled", 00:17:04.332 "listen_address": { 00:17:04.332 "trtype": "TCP", 00:17:04.332 "adrfam": "IPv4", 00:17:04.332 "traddr": "10.0.0.2", 00:17:04.332 "trsvcid": "4420" 00:17:04.332 }, 00:17:04.332 "peer_address": { 00:17:04.332 "trtype": "TCP", 00:17:04.332 "adrfam": "IPv4", 00:17:04.332 "traddr": "10.0.0.1", 00:17:04.332 "trsvcid": "44148" 00:17:04.332 }, 00:17:04.332 "auth": { 00:17:04.332 "state": "completed", 00:17:04.332 "digest": "sha512", 00:17:04.332 "dhgroup": "ffdhe3072" 00:17:04.332 } 00:17:04.332 } 00:17:04.332 ]' 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:04.332 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.589 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:04.589 11:09:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.589 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.589 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.589 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.153 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # 
connect_authenticate sha512 ffdhe3072 3 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.410 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.668 00:17:05.668 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:05.668 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:05.668 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:05.925 { 00:17:05.925 "cntlid": 119, 00:17:05.925 "qid": 0, 00:17:05.925 "state": "enabled", 00:17:05.925 "listen_address": { 00:17:05.925 "trtype": "TCP", 00:17:05.925 "adrfam": "IPv4", 00:17:05.925 "traddr": "10.0.0.2", 00:17:05.925 "trsvcid": "4420" 00:17:05.925 }, 00:17:05.925 "peer_address": { 00:17:05.925 "trtype": "TCP", 00:17:05.925 "adrfam": "IPv4", 00:17:05.925 "traddr": "10.0.0.1", 00:17:05.925 "trsvcid": "44156" 00:17:05.925 }, 00:17:05.925 "auth": { 00:17:05.925 "state": "completed", 00:17:05.925 "digest": "sha512", 00:17:05.925 "dhgroup": "ffdhe3072" 00:17:05.925 } 00:17:05.925 } 00:17:05.925 ]' 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.925 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.182 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:06.745 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key qpairs 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:07.002 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:07.258 00:17:07.258 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:07.258 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.258 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:07.514 { 00:17:07.514 "cntlid": 121, 00:17:07.514 "qid": 0, 00:17:07.514 "state": "enabled", 00:17:07.514 "listen_address": { 00:17:07.514 "trtype": "TCP", 00:17:07.514 "adrfam": "IPv4", 00:17:07.514 "traddr": "10.0.0.2", 00:17:07.514 "trsvcid": "4420" 00:17:07.514 }, 00:17:07.514 "peer_address": { 00:17:07.514 "trtype": "TCP", 00:17:07.514 "adrfam": "IPv4", 00:17:07.514 "traddr": "10.0.0.1", 00:17:07.514 "trsvcid": "47886" 00:17:07.514 }, 00:17:07.514 "auth": { 00:17:07.514 "state": "completed", 00:17:07.514 "digest": "sha512", 00:17:07.514 "dhgroup": "ffdhe4096" 00:17:07.514 } 00:17:07.514 } 00:17:07.514 ]' 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.514 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.772 
11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.336 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.594 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:17:08.594 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:08.594 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.594 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:08.594 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# key=key1 00:17:08.595 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:08.595 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.595 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.595 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.595 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:08.595 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:08.853 00:17:08.853 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:08.853 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:08.853 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.853 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.853 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.853 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.853 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.853 11:09:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.853 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:08.853 { 00:17:08.853 "cntlid": 123, 00:17:08.853 "qid": 0, 00:17:08.853 "state": "enabled", 00:17:08.853 "listen_address": { 00:17:08.853 "trtype": "TCP", 00:17:08.853 "adrfam": "IPv4", 00:17:08.853 "traddr": "10.0.0.2", 00:17:08.853 "trsvcid": "4420" 00:17:08.853 }, 00:17:08.853 "peer_address": { 00:17:08.853 "trtype": "TCP", 00:17:08.853 "adrfam": "IPv4", 00:17:08.853 "traddr": "10.0.0.1", 00:17:08.853 "trsvcid": "47924" 00:17:08.853 }, 00:17:08.853 "auth": { 00:17:08.853 "state": "completed", 00:17:08.853 "digest": "sha512", 00:17:08.853 "dhgroup": "ffdhe4096" 00:17:08.853 } 00:17:08.853 } 00:17:08.853 ]' 00:17:08.853 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.112 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.370 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:17:09.936 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:09.936 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:10.194 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:10.451 { 00:17:10.451 "cntlid": 125, 00:17:10.451 "qid": 0, 00:17:10.451 "state": "enabled", 00:17:10.451 
"listen_address": { 00:17:10.451 "trtype": "TCP", 00:17:10.451 "adrfam": "IPv4", 00:17:10.451 "traddr": "10.0.0.2", 00:17:10.451 "trsvcid": "4420" 00:17:10.451 }, 00:17:10.451 "peer_address": { 00:17:10.451 "trtype": "TCP", 00:17:10.451 "adrfam": "IPv4", 00:17:10.451 "traddr": "10.0.0.1", 00:17:10.451 "trsvcid": "47956" 00:17:10.451 }, 00:17:10.451 "auth": { 00:17:10.451 "state": "completed", 00:17:10.451 "digest": "sha512", 00:17:10.451 "dhgroup": "ffdhe4096" 00:17:10.451 } 00:17:10.451 } 00:17:10.451 ]' 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:10.451 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.452 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:10.709 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.709 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:10.709 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.709 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.709 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.709 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
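The disconnect above closes one full cycle: the test loops over dhgroups and key indices, and for each pair issues the same RPC sequence (set DH-CHAP options, register the host NQN with a key, attach a controller, verify, then tear down). A dry-run sketch of that sequence, using the paths and NQNs that appear in the log (it only prints the commands, so it can run without a live target; the function body is a simplified reading of `target/auth.sh`, not its exact code):

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate cycle from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # Host-side RPCs go through -s /var/tmp/host.sock (the "hostrpc" wrapper);
  # target-side calls are shown with their rpc_cmd names.
  echo "$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
  echo "rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$keyid"
  echo "$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key$keyid"
  echo "$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0"
  echo "rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn"
}

connect_authenticate sha512 ffdhe4096 2
```

The kernel-initiator leg (`nvme connect ... --dhchap-secret DHHC-1:0N:...` followed by `nvme disconnect`) repeats the same handshake outside SPDK's host stack and is omitted from the sketch.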
00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.274 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.531 
11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.531 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.787 00:17:11.787 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:11.787 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:11.787 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:12.044 { 00:17:12.044 "cntlid": 127, 00:17:12.044 "qid": 0, 00:17:12.044 "state": "enabled", 00:17:12.044 "listen_address": { 00:17:12.044 "trtype": "TCP", 00:17:12.044 "adrfam": "IPv4", 00:17:12.044 "traddr": "10.0.0.2", 00:17:12.044 "trsvcid": "4420" 00:17:12.044 }, 00:17:12.044 "peer_address": { 00:17:12.044 "trtype": "TCP", 00:17:12.044 "adrfam": "IPv4", 
00:17:12.044 "traddr": "10.0.0.1", 00:17:12.044 "trsvcid": "47984" 00:17:12.044 }, 00:17:12.044 "auth": { 00:17:12.044 "state": "completed", 00:17:12.044 "digest": "sha512", 00:17:12.044 "dhgroup": "ffdhe4096" 00:17:12.044 } 00:17:12.044 } 00:17:12.044 ]' 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:12.044 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.300 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.300 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.300 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:17:12.863 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.864 11:09:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.864 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:13.121 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:13.377 00:17:13.377 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:13.377 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:13.377 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:13.634 { 00:17:13.634 "cntlid": 129, 00:17:13.634 "qid": 0, 00:17:13.634 "state": "enabled", 00:17:13.634 "listen_address": { 00:17:13.634 "trtype": "TCP", 00:17:13.634 "adrfam": "IPv4", 00:17:13.634 "traddr": "10.0.0.2", 00:17:13.634 "trsvcid": "4420" 00:17:13.634 }, 00:17:13.634 "peer_address": { 00:17:13.634 "trtype": "TCP", 00:17:13.634 "adrfam": "IPv4", 00:17:13.634 "traddr": "10.0.0.1", 00:17:13.634 "trsvcid": "48018" 00:17:13.634 }, 00:17:13.634 "auth": { 00:17:13.634 "state": 
"completed", 00:17:13.634 "digest": "sha512", 00:17:13.634 "dhgroup": "ffdhe6144" 00:17:13.634 } 00:17:13.634 } 00:17:13.634 ]' 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.634 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.891 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.481 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:14.758 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:15.014 00:17:15.014 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.014 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.014 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:15.272 { 00:17:15.272 "cntlid": 131, 00:17:15.272 "qid": 0, 00:17:15.272 "state": "enabled", 00:17:15.272 "listen_address": { 00:17:15.272 "trtype": "TCP", 00:17:15.272 "adrfam": "IPv4", 00:17:15.272 "traddr": "10.0.0.2", 00:17:15.272 "trsvcid": "4420" 00:17:15.272 }, 00:17:15.272 "peer_address": { 00:17:15.272 "trtype": "TCP", 00:17:15.272 "adrfam": "IPv4", 00:17:15.272 "traddr": "10.0.0.1", 00:17:15.272 "trsvcid": "48056" 00:17:15.272 }, 00:17:15.272 "auth": { 00:17:15.272 "state": "completed", 00:17:15.272 "digest": "sha512", 00:17:15.272 "dhgroup": "ffdhe6144" 00:17:15.272 } 00:17:15.272 } 00:17:15.272 ]' 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.272 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.529 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:16.095 
11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.095 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.352 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:17:16.352 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:16.353 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
00:17:16.610 00:17:16.610 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:16.610 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:16.610 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:16.868 { 00:17:16.868 "cntlid": 133, 00:17:16.868 "qid": 0, 00:17:16.868 "state": "enabled", 00:17:16.868 "listen_address": { 00:17:16.868 "trtype": "TCP", 00:17:16.868 "adrfam": "IPv4", 00:17:16.868 "traddr": "10.0.0.2", 00:17:16.868 "trsvcid": "4420" 00:17:16.868 }, 00:17:16.868 "peer_address": { 00:17:16.868 "trtype": "TCP", 00:17:16.868 "adrfam": "IPv4", 00:17:16.868 "traddr": "10.0.0.1", 00:17:16.868 "trsvcid": "34922" 00:17:16.868 }, 00:17:16.868 "auth": { 00:17:16.868 "state": "completed", 00:17:16.868 "digest": "sha512", 00:17:16.868 "dhgroup": "ffdhe6144" 00:17:16.868 } 00:17:16.868 } 00:17:16.868 ]' 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.868 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:16.868 11:09:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.868 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:16.868 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.868 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.868 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.126 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.692 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.949 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.207 00:17:18.207 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:18.207 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:17:18.207 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:18.466 { 00:17:18.466 "cntlid": 135, 00:17:18.466 "qid": 0, 00:17:18.466 "state": "enabled", 00:17:18.466 "listen_address": { 00:17:18.466 "trtype": "TCP", 00:17:18.466 "adrfam": "IPv4", 00:17:18.466 "traddr": "10.0.0.2", 00:17:18.466 "trsvcid": "4420" 00:17:18.466 }, 00:17:18.466 "peer_address": { 00:17:18.466 "trtype": "TCP", 00:17:18.466 "adrfam": "IPv4", 00:17:18.466 "traddr": "10.0.0.1", 00:17:18.466 "trsvcid": "34950" 00:17:18.466 }, 00:17:18.466 "auth": { 00:17:18.466 "state": "completed", 00:17:18.466 "digest": "sha512", 00:17:18.466 "dhgroup": "ffdhe6144" 00:17:18.466 } 00:17:18.466 } 00:17:18.466 ]' 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.466 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.723 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.290 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:19.548 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:19.805 00:17:19.805 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:19.805 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.805 
11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:20.074 { 00:17:20.074 "cntlid": 137, 00:17:20.074 "qid": 0, 00:17:20.074 "state": "enabled", 00:17:20.074 "listen_address": { 00:17:20.074 "trtype": "TCP", 00:17:20.074 "adrfam": "IPv4", 00:17:20.074 "traddr": "10.0.0.2", 00:17:20.074 "trsvcid": "4420" 00:17:20.074 }, 00:17:20.074 "peer_address": { 00:17:20.074 "trtype": "TCP", 00:17:20.074 "adrfam": "IPv4", 00:17:20.074 "traddr": "10.0.0.1", 00:17:20.074 "trsvcid": "34970" 00:17:20.074 }, 00:17:20.074 "auth": { 00:17:20.074 "state": "completed", 00:17:20.074 "digest": "sha512", 00:17:20.074 "dhgroup": "ffdhe8192" 00:17:20.074 } 00:17:20.074 } 00:17:20.074 ]' 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.074 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:20.344 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.344 11:09:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.344 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.344 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.909 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key qpairs 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:21.166 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:21.730 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- 
# rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.730 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:21.730 { 00:17:21.730 "cntlid": 139, 00:17:21.730 "qid": 0, 00:17:21.730 "state": "enabled", 00:17:21.730 "listen_address": { 00:17:21.730 "trtype": "TCP", 00:17:21.730 "adrfam": "IPv4", 00:17:21.730 "traddr": "10.0.0.2", 00:17:21.730 "trsvcid": "4420" 00:17:21.730 }, 00:17:21.730 "peer_address": { 00:17:21.730 "trtype": "TCP", 00:17:21.730 "adrfam": "IPv4", 00:17:21.730 "traddr": "10.0.0.1", 00:17:21.730 "trsvcid": "34998" 00:17:21.730 }, 00:17:21.730 "auth": { 00:17:21.730 "state": "completed", 00:17:21.730 "digest": "sha512", 00:17:21.730 "dhgroup": "ffdhe8192" 00:17:21.731 } 00:17:21.731 } 00:17:21.731 ]' 00:17:21.731 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:21.731 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.731 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:21.988 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.988 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:21.988 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.988 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.988 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:21.988 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OWViZGE2NTdjYmQxZjQ1OTM2MzczNjE5MjljN2EzNmL7PpX3: 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:22.552 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.553 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 
00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:22.810 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:23.374 00:17:23.374 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:23.374 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:23.374 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.374 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.631 11:09:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:23.631 { 00:17:23.631 "cntlid": 141, 00:17:23.631 "qid": 0, 00:17:23.631 "state": "enabled", 00:17:23.631 "listen_address": { 00:17:23.631 "trtype": "TCP", 00:17:23.631 "adrfam": "IPv4", 00:17:23.631 "traddr": "10.0.0.2", 00:17:23.631 "trsvcid": "4420" 00:17:23.631 }, 00:17:23.631 "peer_address": { 00:17:23.631 "trtype": "TCP", 00:17:23.631 "adrfam": "IPv4", 00:17:23.631 "traddr": "10.0.0.1", 00:17:23.631 "trsvcid": "35022" 00:17:23.631 }, 00:17:23.631 "auth": { 00:17:23.631 "state": "completed", 00:17:23.631 "digest": "sha512", 00:17:23.631 "dhgroup": "ffdhe8192" 00:17:23.631 } 00:17:23.631 } 00:17:23.631 ]' 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.631 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.888 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:MGU4NzViNzg4ZDhkODc2YjAxMTQ3ZWEyMWQzYzM4YTc0MjQ3YzM3ZGY2OGI4ZWM0wk2Hmw==: 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:24.452 11:09:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.452 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.709 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.709 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.709 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.966 00:17:24.966 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:24.966 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:24.966 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:25.223 { 00:17:25.223 "cntlid": 143, 00:17:25.223 "qid": 0, 00:17:25.223 "state": 
"enabled", 00:17:25.223 "listen_address": { 00:17:25.223 "trtype": "TCP", 00:17:25.223 "adrfam": "IPv4", 00:17:25.223 "traddr": "10.0.0.2", 00:17:25.223 "trsvcid": "4420" 00:17:25.223 }, 00:17:25.223 "peer_address": { 00:17:25.223 "trtype": "TCP", 00:17:25.223 "adrfam": "IPv4", 00:17:25.223 "traddr": "10.0.0.1", 00:17:25.223 "trsvcid": "35056" 00:17:25.223 }, 00:17:25.223 "auth": { 00:17:25.223 "state": "completed", 00:17:25.223 "digest": "sha512", 00:17:25.223 "dhgroup": "ffdhe8192" 00:17:25.223 } 00:17:25.223 } 00:17:25.223 ]' 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:25.481 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.481 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.481 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.481 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmEzYjZhZWNjN2IzNGVlNjBlZTUzMTMwMWEzZWYyZGZhYmI5ZjZmYzlhZmJjMTcwM2Y5NTM2NzA3ODgxMGU4N0wU71E=: 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.045 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.045 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.314 11:09:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:26.314 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:26.878 00:17:26.878 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:26.878 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:26.878 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:26.878 { 00:17:26.878 "cntlid": 145, 00:17:26.878 "qid": 0, 00:17:26.878 "state": "enabled", 00:17:26.878 "listen_address": { 00:17:26.878 "trtype": "TCP", 00:17:26.878 "adrfam": "IPv4", 00:17:26.878 "traddr": "10.0.0.2", 00:17:26.878 "trsvcid": "4420" 00:17:26.878 }, 00:17:26.878 "peer_address": { 00:17:26.878 "trtype": "TCP", 00:17:26.878 "adrfam": "IPv4", 00:17:26.878 "traddr": "10.0.0.1", 00:17:26.878 "trsvcid": "44212" 00:17:26.878 }, 00:17:26.878 "auth": { 00:17:26.878 "state": "completed", 00:17:26.878 "digest": "sha512", 00:17:26.878 "dhgroup": "ffdhe8192" 00:17:26.878 } 00:17:26.878 } 00:17:26.878 ]' 00:17:26.878 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.135 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.393 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MDBmM2NmODI2OGI2YzFiM2Y1Y2NlMGQyZDE2ZDM1NDdhZTE2YzNjYmFmYjZhODRleNC/ig==: 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.957 11:09:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.957 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:28.214 request: 00:17:28.214 { 00:17:28.214 "name": "nvme0", 00:17:28.214 "trtype": "tcp", 00:17:28.214 "traddr": "10.0.0.2", 00:17:28.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.214 "adrfam": "ipv4", 00:17:28.215 "trsvcid": "4420", 00:17:28.215 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.215 "dhchap_key": "key2", 00:17:28.215 "method": "bdev_nvme_attach_controller", 00:17:28.215 "req_id": 1 00:17:28.215 } 00:17:28.215 Got JSON-RPC error response 00:17:28.215 response: 00:17:28.215 { 00:17:28.215 "code": -32602, 00:17:28.215 "message": "Invalid parameters" 00:17:28.215 } 00:17:28.215 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:28.215 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:28.215 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:28.215 
11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:28.215 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.215 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.215 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2246387 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2246387 ']' 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2246387 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2246387 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2246387' 00:17:28.472 killing process with pid 2246387 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2246387 00:17:28.472 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2246387 00:17:28.729 
11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.729 rmmod nvme_tcp 00:17:28.729 rmmod nvme_fabrics 00:17:28.729 rmmod nvme_keyring 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2246140 ']' 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2246140 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2246140 ']' 00:17:28.729 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2246140 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2246140 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 
'killing process with pid 2246140' 00:17:28.730 killing process with pid 2246140 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2246140 00:17:28.730 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2246140 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.987 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.518 11:09:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:31.518 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.HWV /tmp/spdk.key-sha256.a4e /tmp/spdk.key-sha384.yrg /tmp/spdk.key-sha512.iXv /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:31.518 00:17:31.518 real 2m4.581s 00:17:31.518 user 4m44.699s 00:17:31.518 sys 0m19.741s 00:17:31.518 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:31.518 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.518 ************************************ 00:17:31.518 END TEST nvmf_auth_target 00:17:31.518 ************************************ 00:17:31.518 11:09:28 nvmf_tcp -- 
nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:31.518 11:09:28 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.518 11:09:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:17:31.518 11:09:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:31.518 11:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.518 ************************************ 00:17:31.518 START TEST nvmf_bdevio_no_huge 00:17:31.518 ************************************ 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.518 * Looking for test storage... 00:17:31.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.518 11:09:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.806 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local 
-ga e810 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:36.807 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:36.807 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.807 11:09:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:36.807 Found net devices under 0000:86:00.0: cvl_0_0 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.807 11:09:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:36.807 Found net devices under 0000:86:00.1: cvl_0_1 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:17:36.807 00:17:36.807 --- 10.0.0.2 ping statistics --- 00:17:36.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.807 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:17:36.807 00:17:36.807 --- 10.0.0.1 ping statistics --- 00:17:36.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.807 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:36.807 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2270302 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2270302 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 2270302 ']' 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:36.808 11:09:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.808 [2024-05-15 11:09:33.681188] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:17:36.808 [2024-05-15 11:09:33.681232] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:36.808 [2024-05-15 11:09:33.744445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.808 [2024-05-15 11:09:33.828688] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.808 [2024-05-15 11:09:33.828722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.808 [2024-05-15 11:09:33.828729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.808 [2024-05-15 11:09:33.828735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:36.808 [2024-05-15 11:09:33.828740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.808 [2024-05-15 11:09:33.828855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.808 [2024-05-15 11:09:33.828982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.808 [2024-05-15 11:09:33.829092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.808 [2024-05-15 11:09:33.829094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.371 [2024-05-15 11:09:34.524245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.371 
11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.371 Malloc0 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.371 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.372 [2024-05-15 11:09:34.560307] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:37.372 [2024-05-15 11:09:34.560498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.372 { 00:17:37.372 "params": { 00:17:37.372 "name": "Nvme$subsystem", 00:17:37.372 "trtype": "$TEST_TRANSPORT", 00:17:37.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.372 "adrfam": "ipv4", 00:17:37.372 "trsvcid": "$NVMF_PORT", 00:17:37.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.372 "hdgst": ${hdgst:-false}, 00:17:37.372 "ddgst": ${ddgst:-false} 00:17:37.372 }, 00:17:37.372 "method": "bdev_nvme_attach_controller" 00:17:37.372 } 00:17:37.372 EOF 00:17:37.372 )") 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:37.372 11:09:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.372 "params": { 00:17:37.372 "name": "Nvme1", 00:17:37.372 "trtype": "tcp", 00:17:37.372 "traddr": "10.0.0.2", 00:17:37.372 "adrfam": "ipv4", 00:17:37.372 "trsvcid": "4420", 00:17:37.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.372 "hdgst": false, 00:17:37.372 "ddgst": false 00:17:37.372 }, 00:17:37.372 "method": "bdev_nvme_attach_controller" 00:17:37.372 }' 00:17:37.372 [2024-05-15 11:09:34.609190] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:17:37.372 [2024-05-15 11:09:34.609236] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2270478 ] 00:17:37.629 [2024-05-15 11:09:34.669360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:37.629 [2024-05-15 11:09:34.755733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.629 [2024-05-15 11:09:34.755827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.629 [2024-05-15 11:09:34.755827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.886 I/O targets: 00:17:37.886 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:37.886 00:17:37.886 00:17:37.886 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.886 http://cunit.sourceforge.net/ 00:17:37.886 00:17:37.886 00:17:37.886 Suite: bdevio tests on: Nvme1n1 00:17:37.886 Test: blockdev write read block ...passed 00:17:37.886 Test: blockdev write zeroes read block ...passed 00:17:38.143 Test: blockdev write zeroes read no split ...passed 00:17:38.143 Test: blockdev write zeroes read split ...passed 00:17:38.143 Test: blockdev write zeroes 
read split partial ...passed 00:17:38.143 Test: blockdev reset ...[2024-05-15 11:09:35.257461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:38.143 [2024-05-15 11:09:35.257530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b15a0 (9): Bad file descriptor 00:17:38.143 [2024-05-15 11:09:35.276993] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:38.143 passed 00:17:38.143 Test: blockdev write read 8 blocks ...passed 00:17:38.143 Test: blockdev write read size > 128k ...passed 00:17:38.143 Test: blockdev write read invalid size ...passed 00:17:38.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:38.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:38.143 Test: blockdev write read max offset ...passed 00:17:38.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:38.400 Test: blockdev writev readv 8 blocks ...passed 00:17:38.400 Test: blockdev writev readv 30 x 1block ...passed 00:17:38.400 Test: blockdev writev readv block ...passed 00:17:38.400 Test: blockdev writev readv size > 128k ...passed 00:17:38.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:38.400 Test: blockdev comparev and writev ...[2024-05-15 11:09:35.529904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.529933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.529946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.529954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED 
(00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.530198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.530208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.530220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.530227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.530457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.530466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.530477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.530484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.530715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.400 [2024-05-15 11:09:35.530724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:38.400 [2024-05-15 11:09:35.530735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.401 [2024-05-15 11:09:35.530743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:38.401 passed 00:17:38.401 Test: blockdev nvme passthru rw ...passed 00:17:38.401 Test: blockdev nvme passthru vendor specific ...[2024-05-15 11:09:35.613528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.401 [2024-05-15 11:09:35.613545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:38.401 [2024-05-15 11:09:35.613661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.401 [2024-05-15 11:09:35.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:38.401 [2024-05-15 11:09:35.613785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.401 [2024-05-15 11:09:35.613794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:38.401 [2024-05-15 11:09:35.613905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.401 [2024-05-15 11:09:35.613914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:38.401 passed 00:17:38.401 Test: blockdev nvme admin passthru ...passed 00:17:38.658 Test: blockdev copy ...passed 00:17:38.658 00:17:38.658 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.658 suites 1 1 n/a 0 0 00:17:38.658 tests 23 23 23 0 0 00:17:38.658 asserts 152 152 152 0 n/a 00:17:38.658 00:17:38.658 Elapsed time = 1.226 seconds 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.915 11:09:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.915 rmmod nvme_tcp 00:17:38.915 rmmod nvme_fabrics 00:17:38.915 rmmod nvme_keyring 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2270302 ']' 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2270302 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 2270302 ']' 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 2270302 00:17:38.915 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:17:38.915 11:09:36 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:38.916 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2270302 00:17:38.916 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:17:38.916 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:17:38.916 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2270302' 00:17:38.916 killing process with pid 2270302 00:17:38.916 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 2270302 00:17:38.916 [2024-05-15 11:09:36.091920] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:38.916 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 2270302 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.173 11:09:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.699 11:09:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:41.699 00:17:41.699 real 0m10.157s 00:17:41.699 user 0m13.929s 00:17:41.699 sys 0m4.857s 00:17:41.699 11:09:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:41.699 11:09:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.699 ************************************ 00:17:41.699 END TEST nvmf_bdevio_no_huge 00:17:41.699 ************************************ 00:17:41.699 11:09:38 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:41.699 11:09:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:41.699 11:09:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:41.699 11:09:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.699 ************************************ 00:17:41.699 START TEST nvmf_tls 00:17:41.699 ************************************ 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:41.699 * Looking for test storage... 
00:17:41.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.699 11:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@295 -- # net_devs=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.956 
11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:46.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:46.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.956 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:46.956 Found net devices under 0000:86:00.0: cvl_0_0 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:46.957 Found net devices under 0000:86:00.1: cvl_0_1 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.957 11:09:43 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.957 11:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:17:46.957 00:17:46.957 --- 10.0.0.2 ping statistics --- 00:17:46.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.957 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:46.957 00:17:46.957 --- 10.0.0.1 ping statistics --- 00:17:46.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.957 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- 
target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2274087 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2274087 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2274087 ']' 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:46.957 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.957 [2024-05-15 11:09:44.103456] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:17:46.957 [2024-05-15 11:09:44.103501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.957 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.957 [2024-05-15 11:09:44.162721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.214 [2024-05-15 11:09:44.247186] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.214 [2024-05-15 11:09:44.247219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.214 [2024-05-15 11:09:44.247226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.214 [2024-05-15 11:09:44.247232] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.214 [2024-05-15 11:09:44.247237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:47.214 [2024-05-15 11:09:44.247254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:47.777 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:48.033 true 00:17:48.034 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.034 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:48.034 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:48.034 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:48.034 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:48.289 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.290 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:48.547 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:48.547 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:48.547 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:48.803 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.803 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:48.803 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:48.803 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:48.803 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.803 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:49.060 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:49.061 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:49.061 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:49.317 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.318 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:49.318 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:49.318 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:49.318 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:49.574 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.575 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.TGwGvNYWEa 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:49.832 
11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.o5GNrt5iDo 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.TGwGvNYWEa 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.o5GNrt5iDo 00:17:49.832 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.089 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:50.089 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.TGwGvNYWEa 00:17:50.089 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.TGwGvNYWEa 00:17:50.089 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:50.346 [2024-05-15 11:09:47.493004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.346 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.603 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.603 [2024-05-15 11:09:47.813798] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 
00:17:50.603 [2024-05-15 11:09:47.813852] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.603 [2024-05-15 11:09:47.814035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.603 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.859 malloc0 00:17:50.859 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:51.116 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TGwGvNYWEa 00:17:51.116 [2024-05-15 11:09:48.295160] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:51.116 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TGwGvNYWEa 00:17:51.116 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.174 Initializing NVMe Controllers 00:18:01.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.174 Initialization complete. Launching workers. 
00:18:01.174 ========================================================
00:18:01.174 Latency(us)
00:18:01.175 Device Information : IOPS MiB/s Average min max
00:18:01.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16503.05 64.47 3878.48 819.33 6046.08
00:18:01.175 ========================================================
00:18:01.175 Total : 16503.05 64.47 3878.48 819.33 6046.08
00:18:01.175
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TGwGvNYWEa
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TGwGvNYWEa'
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2276643
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2276643 /var/tmp/bdevperf.sock
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2276643 ']'
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100
00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:01.175 11:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.433 [2024-05-15 11:09:58.455902] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:01.433 [2024-05-15 11:09:58.455949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276643 ] 00:18:01.433 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.433 [2024-05-15 11:09:58.505503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.433 [2024-05-15 11:09:58.584342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.999 11:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:01.999 11:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:01.999 11:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TGwGvNYWEa 00:18:02.257 [2024-05-15 11:09:59.391243] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.257 [2024-05-15 11:09:59.391328] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.257 TLSTESTn1 00:18:02.257 11:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:02.515 Running I/O for 10 seconds...
00:18:12.485
00:18:12.485 Latency(us)
00:18:12.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.485 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:12.485 Verification LBA range: start 0x0 length 0x2000
00:18:12.485 TLSTESTn1 : 10.04 3845.24 15.02 0.00 0.00 33228.82 4929.45 51289.04
00:18:12.485 ===================================================================================================================
00:18:12.485 Total : 3845.24 15.02 0.00 0.00 33228.82 4929.45 51289.04
00:18:12.485 0
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2276643
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2276643 ']'
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2276643
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2276643
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2276643'
00:18:12.485 killing process with pid 2276643
00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2276643
00:18:12.485 Received shutdown signal, test time was about 10.000000 seconds
00:18:12.485
00:18:12.485 Latency(us)
00:18:12.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.485 =================================================================================================================== 00:18:12.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.485 [2024-05-15 11:10:09.673616] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:12.485 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2276643 00:18:12.750 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o5GNrt5iDo 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o5GNrt5iDo 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o5GNrt5iDo 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o5GNrt5iDo' 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2278485 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2278485 /var/tmp/bdevperf.sock 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2278485 ']' 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:12.751 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.751 [2024-05-15 11:10:09.912701] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:12.751 [2024-05-15 11:10:09.912746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278485 ] 00:18:12.751 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.751 [2024-05-15 11:10:09.962758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.009 [2024-05-15 11:10:10.046102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.575 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:13.575 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:13.575 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o5GNrt5iDo 00:18:13.833 [2024-05-15 11:10:10.905270] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.833 [2024-05-15 11:10:10.905360] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:13.833 [2024-05-15 11:10:10.909906] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:13.833 [2024-05-15 11:10:10.910539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1918490 (107): Transport endpoint is not connected 00:18:13.833 [2024-05-15 11:10:10.911531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1918490 (9): Bad file descriptor 00:18:13.833 [2024-05-15 11:10:10.912532] 
nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:13.833 [2024-05-15 11:10:10.912541] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:13.833 [2024-05-15 11:10:10.912549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:13.833 request: 00:18:13.833 { 00:18:13.833 "name": "TLSTEST", 00:18:13.833 "trtype": "tcp", 00:18:13.833 "traddr": "10.0.0.2", 00:18:13.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.833 "adrfam": "ipv4", 00:18:13.833 "trsvcid": "4420", 00:18:13.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.833 "psk": "/tmp/tmp.o5GNrt5iDo", 00:18:13.833 "method": "bdev_nvme_attach_controller", 00:18:13.833 "req_id": 1 00:18:13.833 } 00:18:13.833 Got JSON-RPC error response 00:18:13.833 response: 00:18:13.833 { 00:18:13.833 "code": -32602, 00:18:13.833 "message": "Invalid parameters" 00:18:13.833 } 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2278485 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2278485 ']' 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2278485 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2278485 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2278485' 00:18:13.833 killing process with pid 2278485 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2278485 
00:18:13.833 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.833 00:18:13.833 Latency(us) 00:18:13.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.833 =================================================================================================================== 00:18:13.833 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.833 [2024-05-15 11:10:10.971050] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:13.833 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2278485 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TGwGvNYWEa 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TGwGvNYWEa 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TGwGvNYWEa 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TGwGvNYWEa' 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2278717 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2278717 /var/tmp/bdevperf.sock 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2278717 ']' 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:14.091 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.091 [2024-05-15 11:10:11.202044] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:14.091 [2024-05-15 11:10:11.202087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278717 ] 00:18:14.091 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.091 [2024-05-15 11:10:11.250509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.091 [2024-05-15 11:10:11.323412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TGwGvNYWEa 00:18:14.348 [2024-05-15 11:10:11.576119] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.348 [2024-05-15 11:10:11.576209] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.348 [2024-05-15 11:10:11.580746] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:14.348 [2024-05-15 11:10:11.580768] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:14.348 [2024-05-15 11:10:11.580815] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:14.348 [2024-05-15 11:10:11.581441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a9490 (107): Transport endpoint is not connected 00:18:14.348 [2024-05-15 11:10:11.582432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a9490 (9): Bad file descriptor 00:18:14.348 [2024-05-15 11:10:11.583433] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.348 [2024-05-15 11:10:11.583442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:14.348 [2024-05-15 11:10:11.583450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:14.348 request: 00:18:14.348 { 00:18:14.348 "name": "TLSTEST", 00:18:14.348 "trtype": "tcp", 00:18:14.348 "traddr": "10.0.0.2", 00:18:14.348 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:14.348 "adrfam": "ipv4", 00:18:14.348 "trsvcid": "4420", 00:18:14.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.348 "psk": "/tmp/tmp.TGwGvNYWEa", 00:18:14.348 "method": "bdev_nvme_attach_controller", 00:18:14.348 "req_id": 1 00:18:14.348 } 00:18:14.348 Got JSON-RPC error response 00:18:14.348 response: 00:18:14.348 { 00:18:14.348 "code": -32602, 00:18:14.348 "message": "Invalid parameters" 00:18:14.348 } 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2278717 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2278717 ']' 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2278717 00:18:14.348 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:14.349 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:14.349 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2278717 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2278717' 00:18:14.606 killing process with pid 2278717 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2278717 00:18:14.606 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.606 00:18:14.606 Latency(us) 00:18:14.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.606 =================================================================================================================== 00:18:14.606 Total : 0.00 0.00 
0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.606 [2024-05-15 11:10:11.640544] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2278717 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TGwGvNYWEa 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TGwGvNYWEa 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TGwGvNYWEa 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TGwGvNYWEa' 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2278739 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2278739 /var/tmp/bdevperf.sock 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2278739 ']' 00:18:14.606 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.607 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:14.607 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.607 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:14.607 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.864 [2024-05-15 11:10:11.884887] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:14.864 [2024-05-15 11:10:11.884932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278739 ] 00:18:14.864 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.864 [2024-05-15 11:10:11.934080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.864 [2024-05-15 11:10:12.010931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TGwGvNYWEa 00:18:15.805 [2024-05-15 11:10:12.860723] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.805 [2024-05-15 11:10:12.860790] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.805 [2024-05-15 11:10:12.865234] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:15.805 [2024-05-15 11:10:12.865253] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:15.805 [2024-05-15 11:10:12.865275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.805 
[2024-05-15 11:10:12.865957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1954490 (107): Transport endpoint is not connected 00:18:15.805 [2024-05-15 11:10:12.866951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1954490 (9): Bad file descriptor 00:18:15.805 [2024-05-15 11:10:12.867952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:15.805 [2024-05-15 11:10:12.867964] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.805 [2024-05-15 11:10:12.867973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:15.805 request: 00:18:15.805 { 00:18:15.805 "name": "TLSTEST", 00:18:15.805 "trtype": "tcp", 00:18:15.805 "traddr": "10.0.0.2", 00:18:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.805 "adrfam": "ipv4", 00:18:15.805 "trsvcid": "4420", 00:18:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:15.805 "psk": "/tmp/tmp.TGwGvNYWEa", 00:18:15.805 "method": "bdev_nvme_attach_controller", 00:18:15.805 "req_id": 1 00:18:15.805 } 00:18:15.805 Got JSON-RPC error response 00:18:15.805 response: 00:18:15.805 { 00:18:15.805 "code": -32602, 00:18:15.805 "message": "Invalid parameters" 00:18:15.805 } 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2278739 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2278739 ']' 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2278739 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2278739 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # 
process_name=reactor_2 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2278739' 00:18:15.805 killing process with pid 2278739 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2278739 00:18:15.805 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.805 00:18:15.805 Latency(us) 00:18:15.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.805 =================================================================================================================== 00:18:15.805 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.805 [2024-05-15 11:10:12.940503] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.805 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2278739 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:16.063 
11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2278973 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2278973 /var/tmp/bdevperf.sock 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2278973 ']' 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:16.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:16.063 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.063 [2024-05-15 11:10:13.187242] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:16.063 [2024-05-15 11:10:13.187287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278973 ] 00:18:16.063 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.063 [2024-05-15 11:10:13.236665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.063 [2024-05-15 11:10:13.303799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.320 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:16.320 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:16.320 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.320 [2024-05-15 11:10:13.554461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:16.320 [2024-05-15 11:10:13.556329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876b30 (9): Bad file descriptor 00:18:16.320 [2024-05-15 11:10:13.557326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.320 [2024-05-15 11:10:13.557337] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:16.320 [2024-05-15 11:10:13.557347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.320 request: 00:18:16.320 { 00:18:16.320 "name": "TLSTEST", 00:18:16.320 "trtype": "tcp", 00:18:16.320 "traddr": "10.0.0.2", 00:18:16.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.321 "adrfam": "ipv4", 00:18:16.321 "trsvcid": "4420", 00:18:16.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.321 "method": "bdev_nvme_attach_controller", 00:18:16.321 "req_id": 1 00:18:16.321 } 00:18:16.321 Got JSON-RPC error response 00:18:16.321 response: 00:18:16.321 { 00:18:16.321 "code": -32602, 00:18:16.321 "message": "Invalid parameters" 00:18:16.321 } 00:18:16.321 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2278973 00:18:16.321 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2278973 ']' 00:18:16.321 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2278973 00:18:16.321 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:16.321 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:16.321 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2278973 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2278973' 00:18:16.577 killing process with pid 2278973 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2278973 00:18:16.577 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.577 00:18:16.577 Latency(us) 00:18:16.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:18:16.577 =================================================================================================================== 00:18:16.577 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2278973 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2274087 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2274087 ']' 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2274087 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:16.577 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2274087 00:18:16.835 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:16.835 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:16.835 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2274087' 00:18:16.835 killing process with pid 2274087 00:18:16.835 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2274087 00:18:16.835 [2024-05-15 11:10:13.865602] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
00:18:16.835 [2024-05-15 11:10:13.865632] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:16.835 11:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2274087 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:16.835 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.1HZT47fWdD 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.1HZT47fWdD 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.092 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=2279217 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2279217 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2279217 ']' 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:17.093 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.093 [2024-05-15 11:10:14.189954] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:17.093 [2024-05-15 11:10:14.190004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.093 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.093 [2024-05-15 11:10:14.246078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.093 [2024-05-15 11:10:14.312347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.093 [2024-05-15 11:10:14.312389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:17.093 [2024-05-15 11:10:14.312395] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.093 [2024-05-15 11:10:14.312401] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.093 [2024-05-15 11:10:14.312405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.093 [2024-05-15 11:10:14.312424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.023 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:18.023 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:18.023 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.023 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:18.023 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.023 11:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.023 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.1HZT47fWdD 00:18:18.023 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1HZT47fWdD 00:18:18.023 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:18.023 [2024-05-15 11:10:15.167825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.023 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.281 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:18:18.281 [2024-05-15 11:10:15.500648] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:18.281 [2024-05-15 11:10:15.500693] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.281 [2024-05-15 11:10:15.500875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.281 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:18.537 malloc0 00:18:18.537 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:18.794 11:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:18.794 [2024-05-15 11:10:16.002100] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1HZT47fWdD 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1HZT47fWdD' 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2279480 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2279480 /var/tmp/bdevperf.sock 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2279480 ']' 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.794 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:18.795 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.795 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:18.795 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.795 [2024-05-15 11:10:16.052043] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:18.795 [2024-05-15 11:10:16.052088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279480 ] 00:18:19.052 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.052 [2024-05-15 11:10:16.101357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.052 [2024-05-15 11:10:16.173928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.615 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:19.615 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:19.615 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:19.872 [2024-05-15 11:10:17.008742] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.872 [2024-05-15 11:10:17.008817] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.872 TLSTESTn1 00:18:19.872 11:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.129 Running I/O for 10 seconds... 
00:18:30.093 00:18:30.093 Latency(us) 00:18:30.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.093 Verification LBA range: start 0x0 length 0x2000 00:18:30.093 TLSTESTn1 : 10.01 5498.77 21.48 0.00 0.00 23242.38 5670.29 26442.35 00:18:30.093 =================================================================================================================== 00:18:30.093 Total : 5498.77 21.48 0.00 0.00 23242.38 5670.29 26442.35 00:18:30.093 0 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2279480 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2279480 ']' 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2279480 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2279480 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2279480' 00:18:30.093 killing process with pid 2279480 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2279480 00:18:30.093 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.093 00:18:30.093 Latency(us) 00:18:30.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.093 
=================================================================================================================== 00:18:30.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.093 [2024-05-15 11:10:27.276895] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.093 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2279480 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.1HZT47fWdD 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1HZT47fWdD 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1HZT47fWdD 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1HZT47fWdD 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1HZT47fWdD' 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2281318 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2281318 /var/tmp/bdevperf.sock 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2281318 ']' 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:30.352 11:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.352 [2024-05-15 11:10:27.540793] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:30.352 [2024-05-15 11:10:27.540841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281318 ] 00:18:30.352 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.352 [2024-05-15 11:10:27.592474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.610 [2024-05-15 11:10:27.660374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.176 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:31.176 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:31.177 11:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:31.434 [2024-05-15 11:10:28.494615] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.434 [2024-05-15 11:10:28.494667] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:31.434 [2024-05-15 11:10:28.494674] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.1HZT47fWdD 00:18:31.434 request: 00:18:31.434 { 00:18:31.434 "name": "TLSTEST", 00:18:31.434 "trtype": "tcp", 00:18:31.434 "traddr": "10.0.0.2", 00:18:31.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.434 "adrfam": "ipv4", 00:18:31.434 "trsvcid": "4420", 00:18:31.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.434 "psk": "/tmp/tmp.1HZT47fWdD", 00:18:31.434 "method": "bdev_nvme_attach_controller", 00:18:31.434 "req_id": 1 00:18:31.434 } 00:18:31.434 Got JSON-RPC error response 00:18:31.434 response: 00:18:31.434 { 00:18:31.434 "code": -1, 00:18:31.434 
"message": "Operation not permitted" 00:18:31.434 } 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2281318 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2281318 ']' 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2281318 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2281318 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2281318' 00:18:31.434 killing process with pid 2281318 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2281318 00:18:31.434 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.434 00:18:31.434 Latency(us) 00:18:31.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.434 =================================================================================================================== 00:18:31.434 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.434 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2281318 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( 
!es == 0 )) 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2279217 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2279217 ']' 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2279217 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2279217 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2279217' 00:18:31.692 killing process with pid 2279217 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2279217 00:18:31.692 [2024-05-15 11:10:28.801378] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:31.692 [2024-05-15 11:10:28.801420] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:31.692 11:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2279217 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2281606 00:18:31.949 11:10:29 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2281606 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2281606 ']' 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:31.949 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.949 [2024-05-15 11:10:29.072769] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:31.949 [2024-05-15 11:10:29.072815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.949 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.949 [2024-05-15 11:10:29.129309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.949 [2024-05-15 11:10:29.199048] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.949 [2024-05-15 11:10:29.199088] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:31.949 [2024-05-15 11:10:29.199094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.949 [2024-05-15 11:10:29.199100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.949 [2024-05-15 11:10:29.199105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.949 [2024-05-15 11:10:29.199123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.1HZT47fWdD 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1HZT47fWdD 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.1HZT47fWdD 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.1HZT47fWdD 00:18:32.882 11:10:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.882 [2024-05-15 11:10:30.066738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.882 11:10:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.139 11:10:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:33.396 [2024-05-15 11:10:30.407579] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:33.396 [2024-05-15 11:10:30.407627] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.396 [2024-05-15 11:10:30.407787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.396 11:10:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.396 malloc0 00:18:33.396 11:10:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.654 11:10:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:33.913 [2024-05-15 11:10:30.941010] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:33.913 [2024-05-15 11:10:30.941038] 
tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:33.913 [2024-05-15 11:10:30.941061] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:33.913 request: 00:18:33.913 { 00:18:33.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.913 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.913 "psk": "/tmp/tmp.1HZT47fWdD", 00:18:33.913 "method": "nvmf_subsystem_add_host", 00:18:33.913 "req_id": 1 00:18:33.913 } 00:18:33.913 Got JSON-RPC error response 00:18:33.913 response: 00:18:33.913 { 00:18:33.913 "code": -32603, 00:18:33.913 "message": "Internal error" 00:18:33.913 } 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2281606 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2281606 ']' 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2281606 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:33.913 11:10:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2281606 00:18:33.913 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:33.913 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:33.913 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2281606' 00:18:33.913 killing process with pid 2281606 00:18:33.913 11:10:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # kill 2281606 00:18:33.913 [2024-05-15 11:10:31.006860] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:33.913 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2281606 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.1HZT47fWdD 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2282048 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2282048 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2282048 ']' 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
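The earlier `Incorrect permissions for PSK file` / `Internal error` failure is the reason the test runs `chmod 0600 /tmp/tmp.1HZT47fWdD` above before retrying `nvmf_subsystem_add_host`: the target refuses key files readable by group or other. A minimal sketch of preparing a PSK file so it passes that check (key material and path are illustrative):

```shell
#!/usr/bin/env bash
# Create a key file and restrict it to owner read/write; SPDK's TLS code
# rejects PSK files with broader permissions. Contents are illustrative.
key_file=$(mktemp /tmp/psk.XXXXXX)
echo "illustrative-psk-material" > "$key_file"
chmod 0600 "$key_file"
stat -c '%a' "$key_file"   # prints 600
rm -f "$key_file"
```

After this, the same `--psk` path that was rejected in the first attempt is accepted (with only the deprecation warning about PSK paths remaining, as the log shows).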
00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:34.171 11:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.171 [2024-05-15 11:10:31.280401] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:34.171 [2024-05-15 11:10:31.280447] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.171 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.171 [2024-05-15 11:10:31.337553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.171 [2024-05-15 11:10:31.402897] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.171 [2024-05-15 11:10:31.402939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.171 [2024-05-15 11:10:31.402945] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.171 [2024-05-15 11:10:31.402951] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.171 [2024-05-15 11:10:31.402955] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.171 [2024-05-15 11:10:31.402974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.1HZT47fWdD 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1HZT47fWdD 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.104 [2024-05-15 11:10:32.258530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.104 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.363 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.363 [2024-05-15 11:10:32.599384] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:35.363 [2024-05-15 11:10:32.599428] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.363 [2024-05-15 11:10:32.599605] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.363 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.621 malloc0 00:18:35.621 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.881 11:10:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:35.881 [2024-05-15 11:10:33.092924] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2282323 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2282323 /var/tmp/bdevperf.sock 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2282323 ']' 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
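The `waitforlisten 2282323 /var/tmp/bdevperf.sock` call above blocks until the freshly launched bdevperf process is accepting RPCs on its UNIX socket. A simplified sketch of that bounded polling loop (using a plain file as a stand-in for the socket; names and timings are illustrative):

```shell
#!/usr/bin/env bash
# Poll until a path appears, giving up after a fixed number of retries --
# roughly what waitforlisten does for the RPC socket (stand-in path here).
sock=$(mktemp -u /tmp/demo.XXXXXX)
( sleep 0.3; : > "$sock" ) &      # stand-in for the server creating its socket
for _ in $(seq 1 50); do
    [ -e "$sock" ] && break
    sleep 0.1
done
[ -e "$sock" ] && echo "listening" || echo "timed out"
rm -f "$sock"
```

The real helper additionally checks that the PID is still alive on each iteration, so a crashed server fails fast instead of burning the full retry budget.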
00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:35.881 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.881 [2024-05-15 11:10:33.137353] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:35.881 [2024-05-15 11:10:33.137398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282323 ] 00:18:36.138 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.138 [2024-05-15 11:10:33.188021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.138 [2024-05-15 11:10:33.260022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.138 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:36.138 11:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:36.138 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:36.396 [2024-05-15 11:10:33.492487] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.396 [2024-05-15 11:10:33.492563] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:36.396 TLSTESTn1 00:18:36.396 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:36.655 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:36.655 "subsystems": [ 00:18:36.655 { 00:18:36.655 "subsystem": "keyring", 
00:18:36.655 "config": [] 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "subsystem": "iobuf", 00:18:36.655 "config": [ 00:18:36.655 { 00:18:36.655 "method": "iobuf_set_options", 00:18:36.655 "params": { 00:18:36.655 "small_pool_count": 8192, 00:18:36.655 "large_pool_count": 1024, 00:18:36.655 "small_bufsize": 8192, 00:18:36.655 "large_bufsize": 135168 00:18:36.655 } 00:18:36.655 } 00:18:36.655 ] 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "subsystem": "sock", 00:18:36.655 "config": [ 00:18:36.655 { 00:18:36.655 "method": "sock_impl_set_options", 00:18:36.655 "params": { 00:18:36.655 "impl_name": "posix", 00:18:36.655 "recv_buf_size": 2097152, 00:18:36.655 "send_buf_size": 2097152, 00:18:36.655 "enable_recv_pipe": true, 00:18:36.655 "enable_quickack": false, 00:18:36.655 "enable_placement_id": 0, 00:18:36.655 "enable_zerocopy_send_server": true, 00:18:36.655 "enable_zerocopy_send_client": false, 00:18:36.655 "zerocopy_threshold": 0, 00:18:36.655 "tls_version": 0, 00:18:36.655 "enable_ktls": false 00:18:36.655 } 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "method": "sock_impl_set_options", 00:18:36.655 "params": { 00:18:36.655 "impl_name": "ssl", 00:18:36.655 "recv_buf_size": 4096, 00:18:36.655 "send_buf_size": 4096, 00:18:36.655 "enable_recv_pipe": true, 00:18:36.655 "enable_quickack": false, 00:18:36.655 "enable_placement_id": 0, 00:18:36.655 "enable_zerocopy_send_server": true, 00:18:36.655 "enable_zerocopy_send_client": false, 00:18:36.655 "zerocopy_threshold": 0, 00:18:36.655 "tls_version": 0, 00:18:36.655 "enable_ktls": false 00:18:36.655 } 00:18:36.655 } 00:18:36.655 ] 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "subsystem": "vmd", 00:18:36.655 "config": [] 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "subsystem": "accel", 00:18:36.655 "config": [ 00:18:36.655 { 00:18:36.655 "method": "accel_set_options", 00:18:36.655 "params": { 00:18:36.655 "small_cache_size": 128, 00:18:36.655 "large_cache_size": 16, 00:18:36.655 "task_count": 2048, 00:18:36.655 
"sequence_count": 2048, 00:18:36.655 "buf_count": 2048 00:18:36.655 } 00:18:36.655 } 00:18:36.655 ] 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "subsystem": "bdev", 00:18:36.655 "config": [ 00:18:36.655 { 00:18:36.655 "method": "bdev_set_options", 00:18:36.655 "params": { 00:18:36.655 "bdev_io_pool_size": 65535, 00:18:36.655 "bdev_io_cache_size": 256, 00:18:36.655 "bdev_auto_examine": true, 00:18:36.655 "iobuf_small_cache_size": 128, 00:18:36.655 "iobuf_large_cache_size": 16 00:18:36.655 } 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "method": "bdev_raid_set_options", 00:18:36.655 "params": { 00:18:36.655 "process_window_size_kb": 1024 00:18:36.655 } 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "method": "bdev_iscsi_set_options", 00:18:36.655 "params": { 00:18:36.655 "timeout_sec": 30 00:18:36.655 } 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "method": "bdev_nvme_set_options", 00:18:36.655 "params": { 00:18:36.655 "action_on_timeout": "none", 00:18:36.655 "timeout_us": 0, 00:18:36.655 "timeout_admin_us": 0, 00:18:36.655 "keep_alive_timeout_ms": 10000, 00:18:36.655 "arbitration_burst": 0, 00:18:36.655 "low_priority_weight": 0, 00:18:36.655 "medium_priority_weight": 0, 00:18:36.655 "high_priority_weight": 0, 00:18:36.655 "nvme_adminq_poll_period_us": 10000, 00:18:36.655 "nvme_ioq_poll_period_us": 0, 00:18:36.655 "io_queue_requests": 0, 00:18:36.655 "delay_cmd_submit": true, 00:18:36.655 "transport_retry_count": 4, 00:18:36.655 "bdev_retry_count": 3, 00:18:36.655 "transport_ack_timeout": 0, 00:18:36.655 "ctrlr_loss_timeout_sec": 0, 00:18:36.655 "reconnect_delay_sec": 0, 00:18:36.655 "fast_io_fail_timeout_sec": 0, 00:18:36.655 "disable_auto_failback": false, 00:18:36.655 "generate_uuids": false, 00:18:36.655 "transport_tos": 0, 00:18:36.655 "nvme_error_stat": false, 00:18:36.655 "rdma_srq_size": 0, 00:18:36.655 "io_path_stat": false, 00:18:36.655 "allow_accel_sequence": false, 00:18:36.655 "rdma_max_cq_size": 0, 00:18:36.655 "rdma_cm_event_timeout_ms": 0, 00:18:36.655 
"dhchap_digests": [ 00:18:36.655 "sha256", 00:18:36.655 "sha384", 00:18:36.655 "sha512" 00:18:36.655 ], 00:18:36.655 "dhchap_dhgroups": [ 00:18:36.655 "null", 00:18:36.655 "ffdhe2048", 00:18:36.655 "ffdhe3072", 00:18:36.655 "ffdhe4096", 00:18:36.655 "ffdhe6144", 00:18:36.656 "ffdhe8192" 00:18:36.656 ] 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "bdev_nvme_set_hotplug", 00:18:36.656 "params": { 00:18:36.656 "period_us": 100000, 00:18:36.656 "enable": false 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "bdev_malloc_create", 00:18:36.656 "params": { 00:18:36.656 "name": "malloc0", 00:18:36.656 "num_blocks": 8192, 00:18:36.656 "block_size": 4096, 00:18:36.656 "physical_block_size": 4096, 00:18:36.656 "uuid": "634184e5-d5a6-4055-933f-9448222175de", 00:18:36.656 "optimal_io_boundary": 0 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "bdev_wait_for_examine" 00:18:36.656 } 00:18:36.656 ] 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "subsystem": "nbd", 00:18:36.656 "config": [] 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "subsystem": "scheduler", 00:18:36.656 "config": [ 00:18:36.656 { 00:18:36.656 "method": "framework_set_scheduler", 00:18:36.656 "params": { 00:18:36.656 "name": "static" 00:18:36.656 } 00:18:36.656 } 00:18:36.656 ] 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "subsystem": "nvmf", 00:18:36.656 "config": [ 00:18:36.656 { 00:18:36.656 "method": "nvmf_set_config", 00:18:36.656 "params": { 00:18:36.656 "discovery_filter": "match_any", 00:18:36.656 "admin_cmd_passthru": { 00:18:36.656 "identify_ctrlr": false 00:18:36.656 } 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "nvmf_set_max_subsystems", 00:18:36.656 "params": { 00:18:36.656 "max_subsystems": 1024 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "nvmf_set_crdt", 00:18:36.656 "params": { 00:18:36.656 "crdt1": 0, 00:18:36.656 "crdt2": 0, 00:18:36.656 "crdt3": 0 00:18:36.656 } 00:18:36.656 }, 
00:18:36.656 { 00:18:36.656 "method": "nvmf_create_transport", 00:18:36.656 "params": { 00:18:36.656 "trtype": "TCP", 00:18:36.656 "max_queue_depth": 128, 00:18:36.656 "max_io_qpairs_per_ctrlr": 127, 00:18:36.656 "in_capsule_data_size": 4096, 00:18:36.656 "max_io_size": 131072, 00:18:36.656 "io_unit_size": 131072, 00:18:36.656 "max_aq_depth": 128, 00:18:36.656 "num_shared_buffers": 511, 00:18:36.656 "buf_cache_size": 4294967295, 00:18:36.656 "dif_insert_or_strip": false, 00:18:36.656 "zcopy": false, 00:18:36.656 "c2h_success": false, 00:18:36.656 "sock_priority": 0, 00:18:36.656 "abort_timeout_sec": 1, 00:18:36.656 "ack_timeout": 0, 00:18:36.656 "data_wr_pool_size": 0 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "nvmf_create_subsystem", 00:18:36.656 "params": { 00:18:36.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.656 "allow_any_host": false, 00:18:36.656 "serial_number": "SPDK00000000000001", 00:18:36.656 "model_number": "SPDK bdev Controller", 00:18:36.656 "max_namespaces": 10, 00:18:36.656 "min_cntlid": 1, 00:18:36.656 "max_cntlid": 65519, 00:18:36.656 "ana_reporting": false 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "nvmf_subsystem_add_host", 00:18:36.656 "params": { 00:18:36.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.656 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.656 "psk": "/tmp/tmp.1HZT47fWdD" 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "nvmf_subsystem_add_ns", 00:18:36.656 "params": { 00:18:36.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.656 "namespace": { 00:18:36.656 "nsid": 1, 00:18:36.656 "bdev_name": "malloc0", 00:18:36.656 "nguid": "634184E5D5A64055933F9448222175DE", 00:18:36.656 "uuid": "634184e5-d5a6-4055-933f-9448222175de", 00:18:36.656 "no_auto_visible": false 00:18:36.656 } 00:18:36.656 } 00:18:36.656 }, 00:18:36.656 { 00:18:36.656 "method": "nvmf_subsystem_add_listener", 00:18:36.656 "params": { 00:18:36.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:36.656 "listen_address": { 00:18:36.656 "trtype": "TCP", 00:18:36.656 "adrfam": "IPv4", 00:18:36.656 "traddr": "10.0.0.2", 00:18:36.656 "trsvcid": "4420" 00:18:36.656 }, 00:18:36.656 "secure_channel": true 00:18:36.656 } 00:18:36.656 } 00:18:36.656 ] 00:18:36.656 } 00:18:36.656 ] 00:18:36.656 }' 00:18:36.656 11:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:36.915 11:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:36.915 "subsystems": [ 00:18:36.915 { 00:18:36.915 "subsystem": "keyring", 00:18:36.915 "config": [] 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "subsystem": "iobuf", 00:18:36.915 "config": [ 00:18:36.915 { 00:18:36.915 "method": "iobuf_set_options", 00:18:36.915 "params": { 00:18:36.915 "small_pool_count": 8192, 00:18:36.915 "large_pool_count": 1024, 00:18:36.915 "small_bufsize": 8192, 00:18:36.915 "large_bufsize": 135168 00:18:36.915 } 00:18:36.915 } 00:18:36.915 ] 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "subsystem": "sock", 00:18:36.915 "config": [ 00:18:36.915 { 00:18:36.915 "method": "sock_impl_set_options", 00:18:36.915 "params": { 00:18:36.915 "impl_name": "posix", 00:18:36.915 "recv_buf_size": 2097152, 00:18:36.915 "send_buf_size": 2097152, 00:18:36.915 "enable_recv_pipe": true, 00:18:36.915 "enable_quickack": false, 00:18:36.915 "enable_placement_id": 0, 00:18:36.915 "enable_zerocopy_send_server": true, 00:18:36.915 "enable_zerocopy_send_client": false, 00:18:36.915 "zerocopy_threshold": 0, 00:18:36.915 "tls_version": 0, 00:18:36.915 "enable_ktls": false 00:18:36.915 } 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "method": "sock_impl_set_options", 00:18:36.915 "params": { 00:18:36.915 "impl_name": "ssl", 00:18:36.915 "recv_buf_size": 4096, 00:18:36.915 "send_buf_size": 4096, 00:18:36.915 "enable_recv_pipe": true, 00:18:36.915 "enable_quickack": false, 00:18:36.915 "enable_placement_id": 0, 00:18:36.915 
"enable_zerocopy_send_server": true, 00:18:36.915 "enable_zerocopy_send_client": false, 00:18:36.915 "zerocopy_threshold": 0, 00:18:36.915 "tls_version": 0, 00:18:36.915 "enable_ktls": false 00:18:36.915 } 00:18:36.915 } 00:18:36.915 ] 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "subsystem": "vmd", 00:18:36.915 "config": [] 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "subsystem": "accel", 00:18:36.915 "config": [ 00:18:36.915 { 00:18:36.915 "method": "accel_set_options", 00:18:36.915 "params": { 00:18:36.915 "small_cache_size": 128, 00:18:36.915 "large_cache_size": 16, 00:18:36.915 "task_count": 2048, 00:18:36.915 "sequence_count": 2048, 00:18:36.915 "buf_count": 2048 00:18:36.915 } 00:18:36.915 } 00:18:36.915 ] 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "subsystem": "bdev", 00:18:36.915 "config": [ 00:18:36.915 { 00:18:36.915 "method": "bdev_set_options", 00:18:36.915 "params": { 00:18:36.915 "bdev_io_pool_size": 65535, 00:18:36.915 "bdev_io_cache_size": 256, 00:18:36.915 "bdev_auto_examine": true, 00:18:36.915 "iobuf_small_cache_size": 128, 00:18:36.915 "iobuf_large_cache_size": 16 00:18:36.915 } 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "method": "bdev_raid_set_options", 00:18:36.915 "params": { 00:18:36.915 "process_window_size_kb": 1024 00:18:36.915 } 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "method": "bdev_iscsi_set_options", 00:18:36.915 "params": { 00:18:36.915 "timeout_sec": 30 00:18:36.915 } 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "method": "bdev_nvme_set_options", 00:18:36.915 "params": { 00:18:36.915 "action_on_timeout": "none", 00:18:36.915 "timeout_us": 0, 00:18:36.915 "timeout_admin_us": 0, 00:18:36.915 "keep_alive_timeout_ms": 10000, 00:18:36.916 "arbitration_burst": 0, 00:18:36.916 "low_priority_weight": 0, 00:18:36.916 "medium_priority_weight": 0, 00:18:36.916 "high_priority_weight": 0, 00:18:36.916 "nvme_adminq_poll_period_us": 10000, 00:18:36.916 "nvme_ioq_poll_period_us": 0, 00:18:36.916 "io_queue_requests": 512, 00:18:36.916 
"delay_cmd_submit": true, 00:18:36.916 "transport_retry_count": 4, 00:18:36.916 "bdev_retry_count": 3, 00:18:36.916 "transport_ack_timeout": 0, 00:18:36.916 "ctrlr_loss_timeout_sec": 0, 00:18:36.916 "reconnect_delay_sec": 0, 00:18:36.916 "fast_io_fail_timeout_sec": 0, 00:18:36.916 "disable_auto_failback": false, 00:18:36.916 "generate_uuids": false, 00:18:36.916 "transport_tos": 0, 00:18:36.916 "nvme_error_stat": false, 00:18:36.916 "rdma_srq_size": 0, 00:18:36.916 "io_path_stat": false, 00:18:36.916 "allow_accel_sequence": false, 00:18:36.916 "rdma_max_cq_size": 0, 00:18:36.916 "rdma_cm_event_timeout_ms": 0, 00:18:36.916 "dhchap_digests": [ 00:18:36.916 "sha256", 00:18:36.916 "sha384", 00:18:36.916 "sha512" 00:18:36.916 ], 00:18:36.916 "dhchap_dhgroups": [ 00:18:36.916 "null", 00:18:36.916 "ffdhe2048", 00:18:36.916 "ffdhe3072", 00:18:36.916 "ffdhe4096", 00:18:36.916 "ffdhe6144", 00:18:36.916 "ffdhe8192" 00:18:36.916 ] 00:18:36.916 } 00:18:36.916 }, 00:18:36.916 { 00:18:36.916 "method": "bdev_nvme_attach_controller", 00:18:36.916 "params": { 00:18:36.916 "name": "TLSTEST", 00:18:36.916 "trtype": "TCP", 00:18:36.916 "adrfam": "IPv4", 00:18:36.916 "traddr": "10.0.0.2", 00:18:36.916 "trsvcid": "4420", 00:18:36.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.916 "prchk_reftag": false, 00:18:36.916 "prchk_guard": false, 00:18:36.916 "ctrlr_loss_timeout_sec": 0, 00:18:36.916 "reconnect_delay_sec": 0, 00:18:36.916 "fast_io_fail_timeout_sec": 0, 00:18:36.916 "psk": "/tmp/tmp.1HZT47fWdD", 00:18:36.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.916 "hdgst": false, 00:18:36.916 "ddgst": false 00:18:36.916 } 00:18:36.916 }, 00:18:36.916 { 00:18:36.916 "method": "bdev_nvme_set_hotplug", 00:18:36.916 "params": { 00:18:36.916 "period_us": 100000, 00:18:36.916 "enable": false 00:18:36.916 } 00:18:36.916 }, 00:18:36.916 { 00:18:36.916 "method": "bdev_wait_for_examine" 00:18:36.916 } 00:18:36.916 ] 00:18:36.916 }, 00:18:36.916 { 00:18:36.916 "subsystem": "nbd", 
00:18:36.916 "config": [] 00:18:36.916 } 00:18:36.916 ] 00:18:36.916 }' 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2282323 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2282323 ']' 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2282323 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2282323 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2282323' 00:18:36.916 killing process with pid 2282323 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2282323 00:18:36.916 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.916 00:18:36.916 Latency(us) 00:18:36.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.916 =================================================================================================================== 00:18:36.916 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.916 [2024-05-15 11:10:34.118347] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:36.916 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2282323 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2282048 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2282048 ']' 00:18:37.175 11:10:34 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2282048 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2282048 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2282048' 00:18:37.175 killing process with pid 2282048 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2282048 00:18:37.175 [2024-05-15 11:10:34.370362] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:37.175 [2024-05-15 11:10:34.370397] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:37.175 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2282048 00:18:37.434 11:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:37.434 "subsystems": [ 00:18:37.434 { 00:18:37.434 "subsystem": "keyring", 00:18:37.434 "config": [] 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "subsystem": "iobuf", 00:18:37.434 "config": [ 00:18:37.434 { 00:18:37.434 "method": "iobuf_set_options", 00:18:37.434 "params": { 00:18:37.434 "small_pool_count": 8192, 00:18:37.434 "large_pool_count": 1024, 00:18:37.434 "small_bufsize": 8192, 00:18:37.434 "large_bufsize": 135168 00:18:37.434 } 00:18:37.434 } 00:18:37.434 ] 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "subsystem": "sock", 00:18:37.434 "config": [ 00:18:37.434 { 00:18:37.434 "method": 
"sock_impl_set_options", 00:18:37.434 "params": { 00:18:37.434 "impl_name": "posix", 00:18:37.434 "recv_buf_size": 2097152, 00:18:37.434 "send_buf_size": 2097152, 00:18:37.434 "enable_recv_pipe": true, 00:18:37.434 "enable_quickack": false, 00:18:37.434 "enable_placement_id": 0, 00:18:37.434 "enable_zerocopy_send_server": true, 00:18:37.434 "enable_zerocopy_send_client": false, 00:18:37.434 "zerocopy_threshold": 0, 00:18:37.434 "tls_version": 0, 00:18:37.434 "enable_ktls": false 00:18:37.434 } 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "method": "sock_impl_set_options", 00:18:37.434 "params": { 00:18:37.434 "impl_name": "ssl", 00:18:37.434 "recv_buf_size": 4096, 00:18:37.434 "send_buf_size": 4096, 00:18:37.434 "enable_recv_pipe": true, 00:18:37.434 "enable_quickack": false, 00:18:37.434 "enable_placement_id": 0, 00:18:37.434 "enable_zerocopy_send_server": true, 00:18:37.434 "enable_zerocopy_send_client": false, 00:18:37.434 "zerocopy_threshold": 0, 00:18:37.434 "tls_version": 0, 00:18:37.434 "enable_ktls": false 00:18:37.434 } 00:18:37.434 } 00:18:37.434 ] 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "subsystem": "vmd", 00:18:37.434 "config": [] 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "subsystem": "accel", 00:18:37.434 "config": [ 00:18:37.434 { 00:18:37.434 "method": "accel_set_options", 00:18:37.434 "params": { 00:18:37.434 "small_cache_size": 128, 00:18:37.434 "large_cache_size": 16, 00:18:37.434 "task_count": 2048, 00:18:37.434 "sequence_count": 2048, 00:18:37.434 "buf_count": 2048 00:18:37.434 } 00:18:37.434 } 00:18:37.434 ] 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "subsystem": "bdev", 00:18:37.434 "config": [ 00:18:37.434 { 00:18:37.434 "method": "bdev_set_options", 00:18:37.434 "params": { 00:18:37.434 "bdev_io_pool_size": 65535, 00:18:37.434 "bdev_io_cache_size": 256, 00:18:37.434 "bdev_auto_examine": true, 00:18:37.434 "iobuf_small_cache_size": 128, 00:18:37.434 "iobuf_large_cache_size": 16 00:18:37.434 } 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 
"method": "bdev_raid_set_options", 00:18:37.434 "params": { 00:18:37.434 "process_window_size_kb": 1024 00:18:37.434 } 00:18:37.434 }, 00:18:37.434 { 00:18:37.434 "method": "bdev_iscsi_set_options", 00:18:37.434 "params": { 00:18:37.435 "timeout_sec": 30 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "bdev_nvme_set_options", 00:18:37.435 "params": { 00:18:37.435 "action_on_timeout": "none", 00:18:37.435 "timeout_us": 0, 00:18:37.435 "timeout_admin_us": 0, 00:18:37.435 "keep_alive_timeout_ms": 10000, 00:18:37.435 "arbitration_burst": 0, 00:18:37.435 "low_priority_weight": 0, 00:18:37.435 "medium_priority_weight": 0, 00:18:37.435 "high_priority_weight": 0, 00:18:37.435 "nvme_adminq_poll_period_us": 10000, 00:18:37.435 "nvme_ioq_poll_period_us": 0, 00:18:37.435 "io_queue_requests": 0, 00:18:37.435 "delay_cmd_submit": true, 00:18:37.435 "transport_retry_count": 4, 00:18:37.435 "bdev_retry_count": 3, 00:18:37.435 "transport_ack_timeout": 0, 00:18:37.435 "ctrlr_loss_timeout_sec": 0, 00:18:37.435 "reconnect_delay_sec": 0, 00:18:37.435 "fast_io_fail_timeout_sec": 0, 00:18:37.435 "disable_auto_failback": false, 00:18:37.435 "generate_uuids": false, 00:18:37.435 "transport_tos": 0, 00:18:37.435 "nvme_error_stat": false, 00:18:37.435 "rdma_srq_size": 0, 00:18:37.435 "io_path_stat": false, 00:18:37.435 "allow_accel_sequence": false, 00:18:37.435 "rdma_max_cq_size": 0, 00:18:37.435 "rdma_cm_event_timeout_ms": 0, 00:18:37.435 "dhchap_digests": [ 00:18:37.435 "sha256", 00:18:37.435 "sha384", 00:18:37.435 "sha512" 00:18:37.435 ], 00:18:37.435 "dhchap_dhgroups": [ 00:18:37.435 "null", 00:18:37.435 "ffdhe2048", 00:18:37.435 "ffdhe3072", 00:18:37.435 "ffdhe4096", 00:18:37.435 "ffdhe6144", 00:18:37.435 "ffdhe8192" 00:18:37.435 ] 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "bdev_nvme_set_hotplug", 00:18:37.435 "params": { 00:18:37.435 "period_us": 100000, 00:18:37.435 "enable": false 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 
00:18:37.435 "method": "bdev_malloc_create", 00:18:37.435 "params": { 00:18:37.435 "name": "malloc0", 00:18:37.435 "num_blocks": 8192, 00:18:37.435 "block_size": 4096, 00:18:37.435 "physical_block_size": 4096, 00:18:37.435 "uuid": "634184e5-d5a6-4055-933f-9448222175de", 00:18:37.435 "optimal_io_boundary": 0 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "bdev_wait_for_examine" 00:18:37.435 } 00:18:37.435 ] 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "subsystem": "nbd", 00:18:37.435 "config": [] 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "subsystem": "scheduler", 00:18:37.435 "config": [ 00:18:37.435 { 00:18:37.435 "method": "framework_set_scheduler", 00:18:37.435 "params": { 00:18:37.435 "name": "static" 00:18:37.435 } 00:18:37.435 } 00:18:37.435 ] 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "subsystem": "nvmf", 00:18:37.435 "config": [ 00:18:37.435 { 00:18:37.435 "method": "nvmf_set_config", 00:18:37.435 "params": { 00:18:37.435 "discovery_filter": "match_any", 00:18:37.435 "admin_cmd_passthru": { 00:18:37.435 "identify_ctrlr": false 00:18:37.435 } 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_set_max_subsystems", 00:18:37.435 "params": { 00:18:37.435 "max_subsystems": 1024 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_set_crdt", 00:18:37.435 "params": { 00:18:37.435 "crdt1": 0, 00:18:37.435 "crdt2": 0, 00:18:37.435 "crdt3": 0 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_create_transport", 00:18:37.435 "params": { 00:18:37.435 "trtype": "TCP", 00:18:37.435 "max_queue_depth": 128, 00:18:37.435 "max_io_qpairs_per_ctrlr": 127, 00:18:37.435 "in_capsule_data_size": 4096, 00:18:37.435 "max_io_size": 131072, 00:18:37.435 "io_unit_size": 131072, 00:18:37.435 "max_aq_depth": 128, 00:18:37.435 "num_shared_buffers": 511, 00:18:37.435 "buf_cache_size": 4294967295, 00:18:37.435 "dif_insert_or_strip": false, 00:18:37.435 "zcopy": false, 00:18:37.435 "c2h_success": 
false, 00:18:37.435 "sock_priority": 0, 00:18:37.435 "abort_timeout_sec": 1, 00:18:37.435 "ack_timeout": 0, 00:18:37.435 "data_wr_pool_size": 0 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_create_subsystem", 00:18:37.435 "params": { 00:18:37.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.435 "allow_any_host": false, 00:18:37.435 "serial_number": "SPDK00000000000001", 00:18:37.435 "model_number": "SPDK bdev Controller", 00:18:37.435 "max_namespaces": 10, 00:18:37.435 "min_cntlid": 1, 00:18:37.435 "max_cntlid": 65519, 00:18:37.435 "ana_reporting": false 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_subsystem_add_host", 00:18:37.435 "params": { 00:18:37.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.435 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.435 "psk": "/tmp/tmp.1HZT47fWdD" 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_subsystem_add_ns", 00:18:37.435 "params": { 00:18:37.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.435 "namespace": { 00:18:37.435 "nsid": 1, 00:18:37.435 "bdev_name": "malloc0", 00:18:37.435 "nguid": "634184E5D5A64055933F9448222175DE", 00:18:37.435 "uuid": "634184e5-d5a6-4055-933f-9448222175de", 00:18:37.435 "no_auto_visible": false 00:18:37.435 } 00:18:37.435 } 00:18:37.435 }, 00:18:37.435 { 00:18:37.435 "method": "nvmf_subsystem_add_listener", 00:18:37.435 "params": { 00:18:37.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.435 "listen_address": { 00:18:37.435 "trtype": "TCP", 00:18:37.435 "adrfam": "IPv4", 00:18:37.435 "traddr": "10.0.0.2", 00:18:37.435 "trsvcid": "4420" 00:18:37.435 }, 00:18:37.435 "secure_channel": true 00:18:37.435 } 00:18:37.435 } 00:18:37.435 ] 00:18:37.435 } 00:18:37.435 ] 00:18:37.435 }' 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2282634 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2282634 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2282634 ']' 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:37.435 11:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.435 [2024-05-15 11:10:34.640420] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:37.435 [2024-05-15 11:10:34.640466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.435 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.435 [2024-05-15 11:10:34.696573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.694 [2024-05-15 11:10:34.774910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:37.694 [2024-05-15 11:10:34.774945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.694 [2024-05-15 11:10:34.774952] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.694 [2024-05-15 11:10:34.774959] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.694 [2024-05-15 11:10:34.774964] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.694 [2024-05-15 11:10:34.775017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.953 [2024-05-15 11:10:34.969416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.953 [2024-05-15 11:10:34.985391] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.953 [2024-05-15 11:10:35.001425] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:37.953 [2024-05-15 11:10:35.001465] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.953 [2024-05-15 11:10:35.009518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT 
SIGTERM EXIT 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2282798 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2282798 /var/tmp/bdevperf.sock 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2282798 ']' 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:38.215 11:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:38.215 "subsystems": [ 00:18:38.215 { 00:18:38.215 "subsystem": "keyring", 00:18:38.215 "config": [] 00:18:38.215 }, 00:18:38.215 { 00:18:38.215 "subsystem": "iobuf", 00:18:38.215 "config": [ 00:18:38.215 { 00:18:38.215 "method": "iobuf_set_options", 00:18:38.215 "params": { 00:18:38.215 "small_pool_count": 8192, 00:18:38.215 "large_pool_count": 1024, 00:18:38.216 "small_bufsize": 8192, 00:18:38.216 "large_bufsize": 135168 00:18:38.216 } 00:18:38.216 } 00:18:38.216 ] 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "subsystem": "sock", 00:18:38.216 "config": [ 00:18:38.216 { 00:18:38.216 "method": "sock_impl_set_options", 00:18:38.216 "params": { 00:18:38.216 "impl_name": "posix", 00:18:38.216 "recv_buf_size": 2097152, 00:18:38.216 "send_buf_size": 2097152, 00:18:38.216 "enable_recv_pipe": true, 00:18:38.216 "enable_quickack": false, 00:18:38.216 "enable_placement_id": 0, 00:18:38.216 "enable_zerocopy_send_server": true, 00:18:38.216 "enable_zerocopy_send_client": false, 00:18:38.216 "zerocopy_threshold": 0, 00:18:38.216 "tls_version": 0, 00:18:38.216 "enable_ktls": false 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "sock_impl_set_options", 00:18:38.216 "params": { 00:18:38.216 "impl_name": "ssl", 00:18:38.216 "recv_buf_size": 4096, 00:18:38.216 "send_buf_size": 4096, 00:18:38.216 "enable_recv_pipe": true, 00:18:38.216 "enable_quickack": false, 00:18:38.216 "enable_placement_id": 0, 00:18:38.216 "enable_zerocopy_send_server": true, 00:18:38.216 "enable_zerocopy_send_client": false, 00:18:38.216 "zerocopy_threshold": 0, 00:18:38.216 "tls_version": 0, 00:18:38.216 "enable_ktls": false 00:18:38.216 } 00:18:38.216 } 00:18:38.216 ] 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "subsystem": "vmd", 00:18:38.216 "config": [] 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "subsystem": "accel", 00:18:38.216 "config": [ 00:18:38.216 { 00:18:38.216 "method": "accel_set_options", 
00:18:38.216 "params": { 00:18:38.216 "small_cache_size": 128, 00:18:38.216 "large_cache_size": 16, 00:18:38.216 "task_count": 2048, 00:18:38.216 "sequence_count": 2048, 00:18:38.216 "buf_count": 2048 00:18:38.216 } 00:18:38.216 } 00:18:38.216 ] 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "subsystem": "bdev", 00:18:38.216 "config": [ 00:18:38.216 { 00:18:38.216 "method": "bdev_set_options", 00:18:38.216 "params": { 00:18:38.216 "bdev_io_pool_size": 65535, 00:18:38.216 "bdev_io_cache_size": 256, 00:18:38.216 "bdev_auto_examine": true, 00:18:38.216 "iobuf_small_cache_size": 128, 00:18:38.216 "iobuf_large_cache_size": 16 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "bdev_raid_set_options", 00:18:38.216 "params": { 00:18:38.216 "process_window_size_kb": 1024 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "bdev_iscsi_set_options", 00:18:38.216 "params": { 00:18:38.216 "timeout_sec": 30 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "bdev_nvme_set_options", 00:18:38.216 "params": { 00:18:38.216 "action_on_timeout": "none", 00:18:38.216 "timeout_us": 0, 00:18:38.216 "timeout_admin_us": 0, 00:18:38.216 "keep_alive_timeout_ms": 10000, 00:18:38.216 "arbitration_burst": 0, 00:18:38.216 "low_priority_weight": 0, 00:18:38.216 "medium_priority_weight": 0, 00:18:38.216 "high_priority_weight": 0, 00:18:38.216 "nvme_adminq_poll_period_us": 10000, 00:18:38.216 "nvme_ioq_poll_period_us": 0, 00:18:38.216 "io_queue_requests": 512, 00:18:38.216 "delay_cmd_submit": true, 00:18:38.216 "transport_retry_count": 4, 00:18:38.216 "bdev_retry_count": 3, 00:18:38.216 "transport_ack_timeout": 0, 00:18:38.216 "ctrlr_loss_timeout_sec": 0, 00:18:38.216 "reconnect_delay_sec": 0, 00:18:38.216 "fast_io_fail_timeout_sec": 0, 00:18:38.216 "disable_auto_failback": false, 00:18:38.216 "generate_uuids": false, 00:18:38.216 "transport_tos": 0, 00:18:38.216 "nvme_error_stat": false, 00:18:38.216 "rdma_srq_size": 0, 00:18:38.216 
"io_path_stat": false, 00:18:38.216 "allow_accel_sequence": false, 00:18:38.216 "rdma_max_cq_size": 0, 00:18:38.216 "rdma_cm_event_timeout_ms": 0, 00:18:38.216 "dhchap_digests": [ 00:18:38.216 "sha256", 00:18:38.216 "sha384", 00:18:38.216 "sha512" 00:18:38.216 ], 00:18:38.216 "dhchap_dhgroups": [ 00:18:38.216 "null", 00:18:38.216 "ffdhe2048", 00:18:38.216 "ffdhe3072", 00:18:38.216 "ffdhe4096", 00:18:38.216 "ffdhe6144", 00:18:38.216 "ffdhe8192" 00:18:38.216 ] 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "bdev_nvme_attach_controller", 00:18:38.216 "params": { 00:18:38.216 "name": "TLSTEST", 00:18:38.216 "trtype": "TCP", 00:18:38.216 "adrfam": "IPv4", 00:18:38.216 "traddr": "10.0.0.2", 00:18:38.216 "trsvcid": "4420", 00:18:38.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.216 "prchk_reftag": false, 00:18:38.216 "prchk_guard": false, 00:18:38.216 "ctrlr_loss_timeout_sec": 0, 00:18:38.216 "reconnect_delay_sec": 0, 00:18:38.216 "fast_io_fail_timeout_sec": 0, 00:18:38.216 "psk": "/tmp/tmp.1HZT47fWdD", 00:18:38.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.216 "hdgst": false, 00:18:38.216 "ddgst": false 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "bdev_nvme_set_hotplug", 00:18:38.216 "params": { 00:18:38.216 "period_us": 100000, 00:18:38.216 "enable": false 00:18:38.216 } 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "method": "bdev_wait_for_examine" 00:18:38.216 } 00:18:38.216 ] 00:18:38.216 }, 00:18:38.216 { 00:18:38.216 "subsystem": "nbd", 00:18:38.216 "config": [] 00:18:38.216 } 00:18:38.216 ] 00:18:38.216 }' 00:18:38.216 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:38.216 11:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.502 [2024-05-15 11:10:35.518610] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:38.502 [2024-05-15 11:10:35.518659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282798 ] 00:18:38.502 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.502 [2024-05-15 11:10:35.568563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.502 [2024-05-15 11:10:35.639863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.771 [2024-05-15 11:10:35.773841] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.771 [2024-05-15 11:10:35.773921] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:39.337 11:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:39.337 11:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:39.337 11:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.337 Running I/O for 10 seconds... 
00:18:49.307 00:18:49.307 Latency(us) 00:18:49.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.307 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.307 Verification LBA range: start 0x0 length 0x2000 00:18:49.307 TLSTESTn1 : 10.01 5489.64 21.44 0.00 0.00 23279.89 6496.61 26898.25 00:18:49.307 =================================================================================================================== 00:18:49.307 Total : 5489.64 21.44 0.00 0.00 23279.89 6496.61 26898.25 00:18:49.307 0 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2282798 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2282798 ']' 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2282798 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2282798 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2282798' 00:18:49.307 killing process with pid 2282798 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2282798 00:18:49.307 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.307 00:18:49.307 Latency(us) 00:18:49.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.307 
=================================================================================================================== 00:18:49.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.307 [2024-05-15 11:10:46.500782] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:49.307 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2282798 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2282634 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2282634 ']' 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2282634 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2282634 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2282634' 00:18:49.565 killing process with pid 2282634 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2282634 00:18:49.565 [2024-05-15 11:10:46.753229] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:49.565 [2024-05-15 11:10:46.753273] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:49.565 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2282634 00:18:49.824 11:10:46 
nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2284648 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2284648 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2284648 ']' 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:49.824 11:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.824 [2024-05-15 11:10:47.023924] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:49.824 [2024-05-15 11:10:47.023969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.824 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.824 [2024-05-15 11:10:47.079945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.082 [2024-05-15 11:10:47.158726] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.082 [2024-05-15 11:10:47.158764] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.082 [2024-05-15 11:10:47.158771] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.082 [2024-05-15 11:10:47.158778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.082 [2024-05-15 11:10:47.158783] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.082 [2024-05-15 11:10:47.158823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.1HZT47fWdD 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1HZT47fWdD 00:18:50.646 11:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.903 [2024-05-15 11:10:48.005841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.903 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.160 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.160 [2024-05-15 11:10:48.346697] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:51.160 [2024-05-15 11:10:48.346741] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.160 [2024-05-15 11:10:48.346915] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.160 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.418 malloc0 00:18:51.418 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1HZT47fWdD 00:18:51.676 [2024-05-15 11:10:48.872323] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2285123 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2285123 /var/tmp/bdevperf.sock 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2285123 ']' 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:51.676 11:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.676 [2024-05-15 11:10:48.936003] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:51.676 [2024-05-15 11:10:48.936054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285123 ] 00:18:51.934 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.934 [2024-05-15 11:10:48.990266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.934 [2024-05-15 11:10:49.063557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.498 11:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:52.498 11:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:52.498 11:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1HZT47fWdD 00:18:52.755 11:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.013 [2024-05-15 11:10:50.058433] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.013 nvme0n1 00:18:53.013 11:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.013 Running I/O for 1 seconds... 
00:18:54.383 00:18:54.383 Latency(us) 00:18:54.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.383 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:54.383 Verification LBA range: start 0x0 length 0x2000 00:18:54.383 nvme0n1 : 1.02 5313.62 20.76 0.00 0.00 23879.63 6325.65 35332.45 00:18:54.383 =================================================================================================================== 00:18:54.383 Total : 5313.62 20.76 0.00 0.00 23879.63 6325.65 35332.45 00:18:54.383 0 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2285123 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2285123 ']' 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2285123 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2285123 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2285123' 00:18:54.383 killing process with pid 2285123 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2285123 00:18:54.383 Received shutdown signal, test time was about 1.000000 seconds 00:18:54.383 00:18:54.383 Latency(us) 00:18:54.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.383 =================================================================================================================== 00:18:54.383 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.383 11:10:51 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2285123 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2284648 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2284648 ']' 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2284648 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2284648 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2284648' 00:18:54.383 killing process with pid 2284648 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2284648 00:18:54.383 [2024-05-15 11:10:51.563059] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:54.383 [2024-05-15 11:10:51.563102] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:54.383 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2284648 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=2285596 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2285596 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2285596 ']' 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:54.641 11:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.641 [2024-05-15 11:10:51.834988] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:54.641 [2024-05-15 11:10:51.835034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.641 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.641 [2024-05-15 11:10:51.890888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.899 [2024-05-15 11:10:51.957833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.899 [2024-05-15 11:10:51.957874] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:54.899 [2024-05-15 11:10:51.957881] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.899 [2024-05-15 11:10:51.957887] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.899 [2024-05-15 11:10:51.957892] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.899 [2024-05-15 11:10:51.957911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.466 [2024-05-15 11:10:52.672619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.466 malloc0 00:18:55.466 [2024-05-15 11:10:52.700736] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:55.466 [2024-05-15 11:10:52.700790] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.466 [2024-05-15 11:10:52.700957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.466 11:10:52 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2285642 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2285642 /var/tmp/bdevperf.sock 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2285642 ']' 00:18:55.466 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.724 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:55.724 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.724 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:55.724 11:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.724 [2024-05-15 11:10:52.772922] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:18:55.724 [2024-05-15 11:10:52.772961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285642 ] 00:18:55.724 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.724 [2024-05-15 11:10:52.827101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.724 [2024-05-15 11:10:52.905787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.658 11:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:56.658 11:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:56.658 11:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1HZT47fWdD 00:18:56.658 11:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:56.658 [2024-05-15 11:10:53.924823] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.916 nvme0n1 00:18:56.916 11:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.916 Running I/O for 1 seconds... 
00:18:58.293 00:18:58.293 Latency(us) 00:18:58.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.293 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:58.293 Verification LBA range: start 0x0 length 0x2000 00:18:58.293 nvme0n1 : 1.02 5151.16 20.12 0.00 0.00 24644.77 4986.43 24732.72 00:18:58.293 =================================================================================================================== 00:18:58.293 Total : 5151.16 20.12 0.00 0.00 24644.77 4986.43 24732.72 00:18:58.293 0 00:18:58.293 11:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:58.293 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.293 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.293 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.293 11:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:58.293 "subsystems": [ 00:18:58.293 { 00:18:58.293 "subsystem": "keyring", 00:18:58.293 "config": [ 00:18:58.293 { 00:18:58.293 "method": "keyring_file_add_key", 00:18:58.293 "params": { 00:18:58.293 "name": "key0", 00:18:58.293 "path": "/tmp/tmp.1HZT47fWdD" 00:18:58.293 } 00:18:58.293 } 00:18:58.293 ] 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "subsystem": "iobuf", 00:18:58.293 "config": [ 00:18:58.293 { 00:18:58.293 "method": "iobuf_set_options", 00:18:58.293 "params": { 00:18:58.293 "small_pool_count": 8192, 00:18:58.293 "large_pool_count": 1024, 00:18:58.293 "small_bufsize": 8192, 00:18:58.293 "large_bufsize": 135168 00:18:58.293 } 00:18:58.293 } 00:18:58.293 ] 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "subsystem": "sock", 00:18:58.293 "config": [ 00:18:58.293 { 00:18:58.293 "method": "sock_impl_set_options", 00:18:58.293 "params": { 00:18:58.293 "impl_name": "posix", 00:18:58.293 "recv_buf_size": 2097152, 00:18:58.293 "send_buf_size": 2097152, 00:18:58.293 "enable_recv_pipe": 
true, 00:18:58.293 "enable_quickack": false, 00:18:58.293 "enable_placement_id": 0, 00:18:58.293 "enable_zerocopy_send_server": true, 00:18:58.293 "enable_zerocopy_send_client": false, 00:18:58.293 "zerocopy_threshold": 0, 00:18:58.293 "tls_version": 0, 00:18:58.293 "enable_ktls": false 00:18:58.293 } 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "method": "sock_impl_set_options", 00:18:58.293 "params": { 00:18:58.293 "impl_name": "ssl", 00:18:58.293 "recv_buf_size": 4096, 00:18:58.293 "send_buf_size": 4096, 00:18:58.293 "enable_recv_pipe": true, 00:18:58.293 "enable_quickack": false, 00:18:58.293 "enable_placement_id": 0, 00:18:58.293 "enable_zerocopy_send_server": true, 00:18:58.293 "enable_zerocopy_send_client": false, 00:18:58.293 "zerocopy_threshold": 0, 00:18:58.293 "tls_version": 0, 00:18:58.293 "enable_ktls": false 00:18:58.293 } 00:18:58.293 } 00:18:58.293 ] 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "subsystem": "vmd", 00:18:58.293 "config": [] 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "subsystem": "accel", 00:18:58.293 "config": [ 00:18:58.293 { 00:18:58.293 "method": "accel_set_options", 00:18:58.293 "params": { 00:18:58.293 "small_cache_size": 128, 00:18:58.293 "large_cache_size": 16, 00:18:58.293 "task_count": 2048, 00:18:58.293 "sequence_count": 2048, 00:18:58.293 "buf_count": 2048 00:18:58.293 } 00:18:58.293 } 00:18:58.293 ] 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "subsystem": "bdev", 00:18:58.293 "config": [ 00:18:58.293 { 00:18:58.293 "method": "bdev_set_options", 00:18:58.293 "params": { 00:18:58.293 "bdev_io_pool_size": 65535, 00:18:58.293 "bdev_io_cache_size": 256, 00:18:58.293 "bdev_auto_examine": true, 00:18:58.293 "iobuf_small_cache_size": 128, 00:18:58.293 "iobuf_large_cache_size": 16 00:18:58.293 } 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "method": "bdev_raid_set_options", 00:18:58.293 "params": { 00:18:58.293 "process_window_size_kb": 1024 00:18:58.293 } 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "method": 
"bdev_iscsi_set_options", 00:18:58.293 "params": { 00:18:58.293 "timeout_sec": 30 00:18:58.293 } 00:18:58.293 }, 00:18:58.293 { 00:18:58.293 "method": "bdev_nvme_set_options", 00:18:58.293 "params": { 00:18:58.293 "action_on_timeout": "none", 00:18:58.293 "timeout_us": 0, 00:18:58.293 "timeout_admin_us": 0, 00:18:58.293 "keep_alive_timeout_ms": 10000, 00:18:58.293 "arbitration_burst": 0, 00:18:58.293 "low_priority_weight": 0, 00:18:58.293 "medium_priority_weight": 0, 00:18:58.293 "high_priority_weight": 0, 00:18:58.293 "nvme_adminq_poll_period_us": 10000, 00:18:58.293 "nvme_ioq_poll_period_us": 0, 00:18:58.293 "io_queue_requests": 0, 00:18:58.293 "delay_cmd_submit": true, 00:18:58.294 "transport_retry_count": 4, 00:18:58.294 "bdev_retry_count": 3, 00:18:58.294 "transport_ack_timeout": 0, 00:18:58.294 "ctrlr_loss_timeout_sec": 0, 00:18:58.294 "reconnect_delay_sec": 0, 00:18:58.294 "fast_io_fail_timeout_sec": 0, 00:18:58.294 "disable_auto_failback": false, 00:18:58.294 "generate_uuids": false, 00:18:58.294 "transport_tos": 0, 00:18:58.294 "nvme_error_stat": false, 00:18:58.294 "rdma_srq_size": 0, 00:18:58.294 "io_path_stat": false, 00:18:58.294 "allow_accel_sequence": false, 00:18:58.294 "rdma_max_cq_size": 0, 00:18:58.294 "rdma_cm_event_timeout_ms": 0, 00:18:58.294 "dhchap_digests": [ 00:18:58.294 "sha256", 00:18:58.294 "sha384", 00:18:58.294 "sha512" 00:18:58.294 ], 00:18:58.294 "dhchap_dhgroups": [ 00:18:58.294 "null", 00:18:58.294 "ffdhe2048", 00:18:58.294 "ffdhe3072", 00:18:58.294 "ffdhe4096", 00:18:58.294 "ffdhe6144", 00:18:58.294 "ffdhe8192" 00:18:58.294 ] 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "bdev_nvme_set_hotplug", 00:18:58.294 "params": { 00:18:58.294 "period_us": 100000, 00:18:58.294 "enable": false 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "bdev_malloc_create", 00:18:58.294 "params": { 00:18:58.294 "name": "malloc0", 00:18:58.294 "num_blocks": 8192, 00:18:58.294 "block_size": 4096, 00:18:58.294 
"physical_block_size": 4096, 00:18:58.294 "uuid": "048ab451-4771-40c9-a45b-896d08ea18a6", 00:18:58.294 "optimal_io_boundary": 0 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "bdev_wait_for_examine" 00:18:58.294 } 00:18:58.294 ] 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "subsystem": "nbd", 00:18:58.294 "config": [] 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "subsystem": "scheduler", 00:18:58.294 "config": [ 00:18:58.294 { 00:18:58.294 "method": "framework_set_scheduler", 00:18:58.294 "params": { 00:18:58.294 "name": "static" 00:18:58.294 } 00:18:58.294 } 00:18:58.294 ] 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "subsystem": "nvmf", 00:18:58.294 "config": [ 00:18:58.294 { 00:18:58.294 "method": "nvmf_set_config", 00:18:58.294 "params": { 00:18:58.294 "discovery_filter": "match_any", 00:18:58.294 "admin_cmd_passthru": { 00:18:58.294 "identify_ctrlr": false 00:18:58.294 } 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "nvmf_set_max_subsystems", 00:18:58.294 "params": { 00:18:58.294 "max_subsystems": 1024 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "nvmf_set_crdt", 00:18:58.294 "params": { 00:18:58.294 "crdt1": 0, 00:18:58.294 "crdt2": 0, 00:18:58.294 "crdt3": 0 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "nvmf_create_transport", 00:18:58.294 "params": { 00:18:58.294 "trtype": "TCP", 00:18:58.294 "max_queue_depth": 128, 00:18:58.294 "max_io_qpairs_per_ctrlr": 127, 00:18:58.294 "in_capsule_data_size": 4096, 00:18:58.294 "max_io_size": 131072, 00:18:58.294 "io_unit_size": 131072, 00:18:58.294 "max_aq_depth": 128, 00:18:58.294 "num_shared_buffers": 511, 00:18:58.294 "buf_cache_size": 4294967295, 00:18:58.294 "dif_insert_or_strip": false, 00:18:58.294 "zcopy": false, 00:18:58.294 "c2h_success": false, 00:18:58.294 "sock_priority": 0, 00:18:58.294 "abort_timeout_sec": 1, 00:18:58.294 "ack_timeout": 0, 00:18:58.294 "data_wr_pool_size": 0 00:18:58.294 } 00:18:58.294 }, 
00:18:58.294 { 00:18:58.294 "method": "nvmf_create_subsystem", 00:18:58.294 "params": { 00:18:58.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.294 "allow_any_host": false, 00:18:58.294 "serial_number": "00000000000000000000", 00:18:58.294 "model_number": "SPDK bdev Controller", 00:18:58.294 "max_namespaces": 32, 00:18:58.294 "min_cntlid": 1, 00:18:58.294 "max_cntlid": 65519, 00:18:58.294 "ana_reporting": false 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "nvmf_subsystem_add_host", 00:18:58.294 "params": { 00:18:58.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.294 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.294 "psk": "key0" 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "nvmf_subsystem_add_ns", 00:18:58.294 "params": { 00:18:58.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.294 "namespace": { 00:18:58.294 "nsid": 1, 00:18:58.294 "bdev_name": "malloc0", 00:18:58.294 "nguid": "048AB451477140C9A45B896D08EA18A6", 00:18:58.294 "uuid": "048ab451-4771-40c9-a45b-896d08ea18a6", 00:18:58.294 "no_auto_visible": false 00:18:58.294 } 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "nvmf_subsystem_add_listener", 00:18:58.294 "params": { 00:18:58.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.294 "listen_address": { 00:18:58.294 "trtype": "TCP", 00:18:58.294 "adrfam": "IPv4", 00:18:58.294 "traddr": "10.0.0.2", 00:18:58.294 "trsvcid": "4420" 00:18:58.294 }, 00:18:58.294 "secure_channel": true 00:18:58.294 } 00:18:58.294 } 00:18:58.294 ] 00:18:58.294 } 00:18:58.294 ] 00:18:58.294 }' 00:18:58.294 11:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:58.294 11:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:58.294 "subsystems": [ 00:18:58.294 { 00:18:58.294 "subsystem": "keyring", 00:18:58.294 "config": [ 00:18:58.294 { 00:18:58.294 "method": "keyring_file_add_key", 
00:18:58.294 "params": { 00:18:58.294 "name": "key0", 00:18:58.294 "path": "/tmp/tmp.1HZT47fWdD" 00:18:58.294 } 00:18:58.294 } 00:18:58.294 ] 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "subsystem": "iobuf", 00:18:58.294 "config": [ 00:18:58.294 { 00:18:58.294 "method": "iobuf_set_options", 00:18:58.294 "params": { 00:18:58.294 "small_pool_count": 8192, 00:18:58.294 "large_pool_count": 1024, 00:18:58.294 "small_bufsize": 8192, 00:18:58.294 "large_bufsize": 135168 00:18:58.294 } 00:18:58.294 } 00:18:58.294 ] 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "subsystem": "sock", 00:18:58.294 "config": [ 00:18:58.294 { 00:18:58.294 "method": "sock_impl_set_options", 00:18:58.294 "params": { 00:18:58.294 "impl_name": "posix", 00:18:58.294 "recv_buf_size": 2097152, 00:18:58.294 "send_buf_size": 2097152, 00:18:58.294 "enable_recv_pipe": true, 00:18:58.294 "enable_quickack": false, 00:18:58.294 "enable_placement_id": 0, 00:18:58.294 "enable_zerocopy_send_server": true, 00:18:58.294 "enable_zerocopy_send_client": false, 00:18:58.294 "zerocopy_threshold": 0, 00:18:58.294 "tls_version": 0, 00:18:58.294 "enable_ktls": false 00:18:58.294 } 00:18:58.294 }, 00:18:58.294 { 00:18:58.294 "method": "sock_impl_set_options", 00:18:58.294 "params": { 00:18:58.294 "impl_name": "ssl", 00:18:58.294 "recv_buf_size": 4096, 00:18:58.295 "send_buf_size": 4096, 00:18:58.295 "enable_recv_pipe": true, 00:18:58.295 "enable_quickack": false, 00:18:58.295 "enable_placement_id": 0, 00:18:58.295 "enable_zerocopy_send_server": true, 00:18:58.295 "enable_zerocopy_send_client": false, 00:18:58.295 "zerocopy_threshold": 0, 00:18:58.295 "tls_version": 0, 00:18:58.295 "enable_ktls": false 00:18:58.295 } 00:18:58.295 } 00:18:58.295 ] 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "subsystem": "vmd", 00:18:58.295 "config": [] 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "subsystem": "accel", 00:18:58.295 "config": [ 00:18:58.295 { 00:18:58.295 "method": "accel_set_options", 00:18:58.295 "params": { 00:18:58.295 
"small_cache_size": 128, 00:18:58.295 "large_cache_size": 16, 00:18:58.295 "task_count": 2048, 00:18:58.295 "sequence_count": 2048, 00:18:58.295 "buf_count": 2048 00:18:58.295 } 00:18:58.295 } 00:18:58.295 ] 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "subsystem": "bdev", 00:18:58.295 "config": [ 00:18:58.295 { 00:18:58.295 "method": "bdev_set_options", 00:18:58.295 "params": { 00:18:58.295 "bdev_io_pool_size": 65535, 00:18:58.295 "bdev_io_cache_size": 256, 00:18:58.295 "bdev_auto_examine": true, 00:18:58.295 "iobuf_small_cache_size": 128, 00:18:58.295 "iobuf_large_cache_size": 16 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_raid_set_options", 00:18:58.295 "params": { 00:18:58.295 "process_window_size_kb": 1024 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_iscsi_set_options", 00:18:58.295 "params": { 00:18:58.295 "timeout_sec": 30 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_nvme_set_options", 00:18:58.295 "params": { 00:18:58.295 "action_on_timeout": "none", 00:18:58.295 "timeout_us": 0, 00:18:58.295 "timeout_admin_us": 0, 00:18:58.295 "keep_alive_timeout_ms": 10000, 00:18:58.295 "arbitration_burst": 0, 00:18:58.295 "low_priority_weight": 0, 00:18:58.295 "medium_priority_weight": 0, 00:18:58.295 "high_priority_weight": 0, 00:18:58.295 "nvme_adminq_poll_period_us": 10000, 00:18:58.295 "nvme_ioq_poll_period_us": 0, 00:18:58.295 "io_queue_requests": 512, 00:18:58.295 "delay_cmd_submit": true, 00:18:58.295 "transport_retry_count": 4, 00:18:58.295 "bdev_retry_count": 3, 00:18:58.295 "transport_ack_timeout": 0, 00:18:58.295 "ctrlr_loss_timeout_sec": 0, 00:18:58.295 "reconnect_delay_sec": 0, 00:18:58.295 "fast_io_fail_timeout_sec": 0, 00:18:58.295 "disable_auto_failback": false, 00:18:58.295 "generate_uuids": false, 00:18:58.295 "transport_tos": 0, 00:18:58.295 "nvme_error_stat": false, 00:18:58.295 "rdma_srq_size": 0, 00:18:58.295 "io_path_stat": false, 00:18:58.295 
"allow_accel_sequence": false, 00:18:58.295 "rdma_max_cq_size": 0, 00:18:58.295 "rdma_cm_event_timeout_ms": 0, 00:18:58.295 "dhchap_digests": [ 00:18:58.295 "sha256", 00:18:58.295 "sha384", 00:18:58.295 "sha512" 00:18:58.295 ], 00:18:58.295 "dhchap_dhgroups": [ 00:18:58.295 "null", 00:18:58.295 "ffdhe2048", 00:18:58.295 "ffdhe3072", 00:18:58.295 "ffdhe4096", 00:18:58.295 "ffdhe6144", 00:18:58.295 "ffdhe8192" 00:18:58.295 ] 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_nvme_attach_controller", 00:18:58.295 "params": { 00:18:58.295 "name": "nvme0", 00:18:58.295 "trtype": "TCP", 00:18:58.295 "adrfam": "IPv4", 00:18:58.295 "traddr": "10.0.0.2", 00:18:58.295 "trsvcid": "4420", 00:18:58.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.295 "prchk_reftag": false, 00:18:58.295 "prchk_guard": false, 00:18:58.295 "ctrlr_loss_timeout_sec": 0, 00:18:58.295 "reconnect_delay_sec": 0, 00:18:58.295 "fast_io_fail_timeout_sec": 0, 00:18:58.295 "psk": "key0", 00:18:58.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.295 "hdgst": false, 00:18:58.295 "ddgst": false 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_nvme_set_hotplug", 00:18:58.295 "params": { 00:18:58.295 "period_us": 100000, 00:18:58.295 "enable": false 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_enable_histogram", 00:18:58.295 "params": { 00:18:58.295 "name": "nvme0n1", 00:18:58.295 "enable": true 00:18:58.295 } 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "method": "bdev_wait_for_examine" 00:18:58.295 } 00:18:58.295 ] 00:18:58.295 }, 00:18:58.295 { 00:18:58.295 "subsystem": "nbd", 00:18:58.295 "config": [] 00:18:58.295 } 00:18:58.295 ] 00:18:58.295 }' 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2285642 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2285642 ']' 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2285642 
00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2285642 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2285642' 00:18:58.295 killing process with pid 2285642 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2285642 00:18:58.295 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.295 00:18:58.295 Latency(us) 00:18:58.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.295 =================================================================================================================== 00:18:58.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.295 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2285642 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2285596 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2285596 ']' 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2285596 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2285596 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo 
']' 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2285596' 00:18:58.555 killing process with pid 2285596 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2285596 00:18:58.555 [2024-05-15 11:10:55.796946] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:58.555 11:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2285596 00:18:58.813 11:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:58.813 11:10:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.813 11:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:58.813 "subsystems": [ 00:18:58.813 { 00:18:58.813 "subsystem": "keyring", 00:18:58.813 "config": [ 00:18:58.813 { 00:18:58.813 "method": "keyring_file_add_key", 00:18:58.813 "params": { 00:18:58.813 "name": "key0", 00:18:58.813 "path": "/tmp/tmp.1HZT47fWdD" 00:18:58.813 } 00:18:58.813 } 00:18:58.813 ] 00:18:58.813 }, 00:18:58.813 { 00:18:58.813 "subsystem": "iobuf", 00:18:58.813 "config": [ 00:18:58.813 { 00:18:58.813 "method": "iobuf_set_options", 00:18:58.813 "params": { 00:18:58.813 "small_pool_count": 8192, 00:18:58.813 "large_pool_count": 1024, 00:18:58.813 "small_bufsize": 8192, 00:18:58.813 "large_bufsize": 135168 00:18:58.813 } 00:18:58.813 } 00:18:58.813 ] 00:18:58.813 }, 00:18:58.813 { 00:18:58.813 "subsystem": "sock", 00:18:58.813 "config": [ 00:18:58.813 { 00:18:58.813 "method": "sock_impl_set_options", 00:18:58.813 "params": { 00:18:58.813 "impl_name": "posix", 00:18:58.813 "recv_buf_size": 2097152, 00:18:58.813 "send_buf_size": 2097152, 00:18:58.813 "enable_recv_pipe": true, 00:18:58.813 "enable_quickack": false, 00:18:58.813 "enable_placement_id": 0, 00:18:58.813 "enable_zerocopy_send_server": true, 00:18:58.813 
"enable_zerocopy_send_client": false, 00:18:58.814 "zerocopy_threshold": 0, 00:18:58.814 "tls_version": 0, 00:18:58.814 "enable_ktls": false 00:18:58.814 } 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "method": "sock_impl_set_options", 00:18:58.814 "params": { 00:18:58.814 "impl_name": "ssl", 00:18:58.814 "recv_buf_size": 4096, 00:18:58.814 "send_buf_size": 4096, 00:18:58.814 "enable_recv_pipe": true, 00:18:58.814 "enable_quickack": false, 00:18:58.814 "enable_placement_id": 0, 00:18:58.814 "enable_zerocopy_send_server": true, 00:18:58.814 "enable_zerocopy_send_client": false, 00:18:58.814 "zerocopy_threshold": 0, 00:18:58.814 "tls_version": 0, 00:18:58.814 "enable_ktls": false 00:18:58.814 } 00:18:58.814 } 00:18:58.814 ] 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "subsystem": "vmd", 00:18:58.814 "config": [] 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "subsystem": "accel", 00:18:58.814 "config": [ 00:18:58.814 { 00:18:58.814 "method": "accel_set_options", 00:18:58.814 "params": { 00:18:58.814 "small_cache_size": 128, 00:18:58.814 "large_cache_size": 16, 00:18:58.814 "task_count": 2048, 00:18:58.814 "sequence_count": 2048, 00:18:58.814 "buf_count": 2048 00:18:58.814 } 00:18:58.814 } 00:18:58.814 ] 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "subsystem": "bdev", 00:18:58.814 "config": [ 00:18:58.814 { 00:18:58.814 "method": "bdev_set_options", 00:18:58.814 "params": { 00:18:58.814 "bdev_io_pool_size": 65535, 00:18:58.814 "bdev_io_cache_size": 256, 00:18:58.814 "bdev_auto_examine": true, 00:18:58.814 "iobuf_small_cache_size": 128, 00:18:58.814 "iobuf_large_cache_size": 16 00:18:58.814 } 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "method": "bdev_raid_set_options", 00:18:58.814 "params": { 00:18:58.814 "process_window_size_kb": 1024 00:18:58.814 } 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "method": "bdev_iscsi_set_options", 00:18:58.814 "params": { 00:18:58.814 "timeout_sec": 30 00:18:58.814 } 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "method": 
"bdev_nvme_set_options", 00:18:58.814 "params": { 00:18:58.814 "action_on_timeout": "none", 00:18:58.814 "timeout_us": 0, 00:18:58.814 "timeout_admin_us": 0, 00:18:58.814 "keep_alive_timeout_ms": 10000, 00:18:58.814 "arbitration_burst": 0, 00:18:58.814 "low_priority_weight": 0, 00:18:58.814 "medium_priority_weight": 0, 00:18:58.814 "high_priority_weight": 0, 00:18:58.814 "nvme_adminq_poll_period_us": 10000, 00:18:58.814 "nvme_ioq_poll_period_us": 0, 00:18:58.814 "io_queue_requests": 0, 00:18:58.814 "delay_cmd_submit": true, 00:18:58.814 "transport_retry_count": 4, 00:18:58.814 "bdev_retry_count": 3, 00:18:58.814 "transport_ack_timeout": 0, 00:18:58.814 "ctrlr_loss_timeout_sec": 0, 00:18:58.814 "reconnect_delay_sec": 0, 00:18:58.814 "fast_io_fail_timeout_sec": 0, 00:18:58.814 "disable_auto_failback": false, 00:18:58.814 "generate_uuids": false, 00:18:58.814 "transport_tos": 0, 00:18:58.814 "nvme_error_stat": false, 00:18:58.814 "rdma_srq_size": 0, 00:18:58.814 "io_path_stat": false, 00:18:58.814 "allow_accel_sequence": false, 00:18:58.814 "rdma_max_cq_size": 0, 00:18:58.814 "rdma_cm_event_timeout_ms": 0, 00:18:58.814 "dhchap_digests": [ 00:18:58.814 "sha256", 00:18:58.814 "sha384", 00:18:58.814 "sha512" 00:18:58.814 ], 00:18:58.814 "dhchap_dhgroups": [ 00:18:58.814 "null", 00:18:58.814 "ffdhe2048", 00:18:58.814 "ffdhe3072", 00:18:58.814 "ffdhe4096", 00:18:58.814 "ffdhe6144", 00:18:58.814 "ffdhe8192" 00:18:58.814 ] 00:18:58.814 } 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "method": "bdev_nvme_set_hotplug", 00:18:58.814 "params": { 00:18:58.814 "period_us": 100000, 00:18:58.814 "enable": false 00:18:58.814 } 00:18:58.814 }, 00:18:58.814 { 00:18:58.814 "method": "bdev_malloc_create", 00:18:58.814 "params": { 00:18:58.814 "name": "malloc0", 00:18:58.814 "num_blocks": 8192, 00:18:58.814 "block_size": 4096, 00:18:58.814 "physical_block_size": 4096, 00:18:58.814 "uuid": "048ab451-4771-40c9-a45b-896d08ea18a6", 00:18:58.814 "optimal_io_boundary": 0 00:18:58.815 } 
00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "bdev_wait_for_examine" 00:18:58.815 } 00:18:58.815 ] 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "subsystem": "nbd", 00:18:58.815 "config": [] 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "subsystem": "scheduler", 00:18:58.815 "config": [ 00:18:58.815 { 00:18:58.815 "method": "framework_set_scheduler", 00:18:58.815 "params": { 00:18:58.815 "name": "static" 00:18:58.815 } 00:18:58.815 } 00:18:58.815 ] 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "subsystem": "nvmf", 00:18:58.815 "config": [ 00:18:58.815 { 00:18:58.815 "method": "nvmf_set_config", 00:18:58.815 "params": { 00:18:58.815 "discovery_filter": "match_any", 00:18:58.815 "admin_cmd_passthru": { 00:18:58.815 "identify_ctrlr": false 00:18:58.815 } 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_set_max_subsystems", 00:18:58.815 "params": { 00:18:58.815 "max_subsystems": 1024 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_set_crdt", 00:18:58.815 "params": { 00:18:58.815 "crdt1": 0, 00:18:58.815 "crdt2": 0, 00:18:58.815 "crdt3": 0 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_create_transport", 00:18:58.815 "params": { 00:18:58.815 "trtype": "TCP", 00:18:58.815 "max_queue_depth": 128, 00:18:58.815 "max_io_qpairs_per_ctrlr": 127, 00:18:58.815 "in_capsule_data_size": 4096, 00:18:58.815 "max_io_size": 131072, 00:18:58.815 "io_unit_size": 131072, 00:18:58.815 "max_aq_depth": 128, 00:18:58.815 "num_shared_buffers": 511, 00:18:58.815 "buf_cache_size": 4294967295, 00:18:58.815 "dif_insert_or_strip": false, 00:18:58.815 "zcopy": false, 00:18:58.815 "c2h_success": false, 00:18:58.815 "sock_priority": 0, 00:18:58.815 "abort_timeout_sec": 1, 00:18:58.815 "ack_timeout": 0, 00:18:58.815 "data_wr_pool_size": 0 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_create_subsystem", 00:18:58.815 "params": { 00:18:58.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.815 
"allow_any_host": false, 00:18:58.815 "serial_number": "00000000000000000000", 00:18:58.815 "model_number": "SPDK bdev Controller", 00:18:58.815 "max_namespaces": 32, 00:18:58.815 "min_cntlid": 1, 00:18:58.815 "max_cntlid": 65519, 00:18:58.815 "ana_reporting": false 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_subsystem_add_host", 00:18:58.815 "params": { 00:18:58.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.815 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.815 "psk": "key0" 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_subsystem_add_ns", 00:18:58.815 "params": { 00:18:58.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.815 "namespace": { 00:18:58.815 "nsid": 1, 00:18:58.815 "bdev_name": "malloc0", 00:18:58.815 "nguid": "048AB451477140C9A45B896D08EA18A6", 00:18:58.815 "uuid": "048ab451-4771-40c9-a45b-896d08ea18a6", 00:18:58.815 "no_auto_visible": false 00:18:58.815 } 00:18:58.815 } 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "method": "nvmf_subsystem_add_listener", 00:18:58.815 "params": { 00:18:58.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.815 "listen_address": { 00:18:58.815 "trtype": "TCP", 00:18:58.815 "adrfam": "IPv4", 00:18:58.815 "traddr": "10.0.0.2", 00:18:58.815 "trsvcid": "4420" 00:18:58.815 }, 00:18:58.815 "secure_channel": true 00:18:58.815 } 00:18:58.815 } 00:18:58.815 ] 00:18:58.815 } 00:18:58.815 ] 00:18:58.815 }' 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2286320 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2286320 00:18:58.815 11:10:56 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2286320 ']' 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.815 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:58.816 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.816 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:58.816 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.816 [2024-05-15 11:10:56.068822] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:58.816 [2024-05-15 11:10:56.068867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.074 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.074 [2024-05-15 11:10:56.124910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.074 [2024-05-15 11:10:56.191552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.074 [2024-05-15 11:10:56.191591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.074 [2024-05-15 11:10:56.191598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.074 [2024-05-15 11:10:56.191604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.074 [2024-05-15 11:10:56.191609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:59.074 [2024-05-15 11:10:56.191680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.333 [2024-05-15 11:10:56.395546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.333 [2024-05-15 11:10:56.427558] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:59.333 [2024-05-15 11:10:56.427600] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.333 [2024-05-15 11:10:56.437460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.591 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:59.591 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:59.591 11:10:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.591 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:59.591 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2286355 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2286355 /var/tmp/bdevperf.sock 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2286355 ']' 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # 
local max_retries=100 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.850 11:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:59.850 "subsystems": [ 00:18:59.850 { 00:18:59.850 "subsystem": "keyring", 00:18:59.850 "config": [ 00:18:59.850 { 00:18:59.850 "method": "keyring_file_add_key", 00:18:59.850 "params": { 00:18:59.850 "name": "key0", 00:18:59.850 "path": "/tmp/tmp.1HZT47fWdD" 00:18:59.850 } 00:18:59.850 } 00:18:59.850 ] 00:18:59.850 }, 00:18:59.850 { 00:18:59.850 "subsystem": "iobuf", 00:18:59.850 "config": [ 00:18:59.850 { 00:18:59.850 "method": "iobuf_set_options", 00:18:59.850 "params": { 00:18:59.850 "small_pool_count": 8192, 00:18:59.850 "large_pool_count": 1024, 00:18:59.850 "small_bufsize": 8192, 00:18:59.850 "large_bufsize": 135168 00:18:59.850 } 00:18:59.850 } 00:18:59.850 ] 00:18:59.850 }, 00:18:59.851 { 00:18:59.851 "subsystem": "sock", 00:18:59.851 "config": [ 00:18:59.851 { 00:18:59.851 "method": "sock_impl_set_options", 00:18:59.851 "params": { 00:18:59.851 "impl_name": "posix", 00:18:59.851 "recv_buf_size": 2097152, 00:18:59.851 "send_buf_size": 2097152, 00:18:59.851 "enable_recv_pipe": true, 00:18:59.851 "enable_quickack": false, 00:18:59.851 "enable_placement_id": 0, 00:18:59.851 "enable_zerocopy_send_server": true, 00:18:59.851 "enable_zerocopy_send_client": false, 00:18:59.851 "zerocopy_threshold": 0, 00:18:59.851 "tls_version": 0, 00:18:59.851 "enable_ktls": false 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "sock_impl_set_options", 00:18:59.851 "params": { 00:18:59.851 "impl_name": "ssl", 00:18:59.851 "recv_buf_size": 4096, 00:18:59.851 "send_buf_size": 4096, 00:18:59.851 "enable_recv_pipe": true, 00:18:59.851 "enable_quickack": false, 00:18:59.851 
"enable_placement_id": 0, 00:18:59.851 "enable_zerocopy_send_server": true, 00:18:59.851 "enable_zerocopy_send_client": false, 00:18:59.851 "zerocopy_threshold": 0, 00:18:59.851 "tls_version": 0, 00:18:59.851 "enable_ktls": false 00:18:59.851 } 00:18:59.851 } 00:18:59.851 ] 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "subsystem": "vmd", 00:18:59.851 "config": [] 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "subsystem": "accel", 00:18:59.851 "config": [ 00:18:59.851 { 00:18:59.851 "method": "accel_set_options", 00:18:59.851 "params": { 00:18:59.851 "small_cache_size": 128, 00:18:59.851 "large_cache_size": 16, 00:18:59.851 "task_count": 2048, 00:18:59.851 "sequence_count": 2048, 00:18:59.851 "buf_count": 2048 00:18:59.851 } 00:18:59.851 } 00:18:59.851 ] 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "subsystem": "bdev", 00:18:59.851 "config": [ 00:18:59.851 { 00:18:59.851 "method": "bdev_set_options", 00:18:59.851 "params": { 00:18:59.851 "bdev_io_pool_size": 65535, 00:18:59.851 "bdev_io_cache_size": 256, 00:18:59.851 "bdev_auto_examine": true, 00:18:59.851 "iobuf_small_cache_size": 128, 00:18:59.851 "iobuf_large_cache_size": 16 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_raid_set_options", 00:18:59.851 "params": { 00:18:59.851 "process_window_size_kb": 1024 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_iscsi_set_options", 00:18:59.851 "params": { 00:18:59.851 "timeout_sec": 30 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_nvme_set_options", 00:18:59.851 "params": { 00:18:59.851 "action_on_timeout": "none", 00:18:59.851 "timeout_us": 0, 00:18:59.851 "timeout_admin_us": 0, 00:18:59.851 "keep_alive_timeout_ms": 10000, 00:18:59.851 "arbitration_burst": 0, 00:18:59.851 "low_priority_weight": 0, 00:18:59.851 "medium_priority_weight": 0, 00:18:59.851 "high_priority_weight": 0, 00:18:59.851 "nvme_adminq_poll_period_us": 10000, 00:18:59.851 "nvme_ioq_poll_period_us": 0, 00:18:59.851 
"io_queue_requests": 512, 00:18:59.851 "delay_cmd_submit": true, 00:18:59.851 "transport_retry_count": 4, 00:18:59.851 "bdev_retry_count": 3, 00:18:59.851 "transport_ack_timeout": 0, 00:18:59.851 "ctrlr_loss_timeout_sec": 0, 00:18:59.851 "reconnect_delay_sec": 0, 00:18:59.851 "fast_io_fail_timeout_sec": 0, 00:18:59.851 "disable_auto_failback": false, 00:18:59.851 "generate_uuids": false, 00:18:59.851 "transport_tos": 0, 00:18:59.851 "nvme_error_stat": false, 00:18:59.851 "rdma_srq_size": 0, 00:18:59.851 "io_path_stat": false, 00:18:59.851 "allow_accel_sequence": false, 00:18:59.851 "rdma_max_cq_size": 0, 00:18:59.851 "rdma_cm_event_timeout_ms": 0, 00:18:59.851 "dhchap_digests": [ 00:18:59.851 "sha256", 00:18:59.851 "sha384", 00:18:59.851 "sha512" 00:18:59.851 ], 00:18:59.851 "dhchap_dhgroups": [ 00:18:59.851 "null", 00:18:59.851 "ffdhe2048", 00:18:59.851 "ffdhe3072", 00:18:59.851 "ffdhe4096", 00:18:59.851 "ffdhe6144", 00:18:59.851 "ffdhe8192" 00:18:59.851 ] 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_nvme_attach_controller", 00:18:59.851 "params": { 00:18:59.851 "name": "nvme0", 00:18:59.851 "trtype": "TCP", 00:18:59.851 "adrfam": "IPv4", 00:18:59.851 "traddr": "10.0.0.2", 00:18:59.851 "trsvcid": "4420", 00:18:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.851 "prchk_reftag": false, 00:18:59.851 "prchk_guard": false, 00:18:59.851 "ctrlr_loss_timeout_sec": 0, 00:18:59.851 "reconnect_delay_sec": 0, 00:18:59.851 "fast_io_fail_timeout_sec": 0, 00:18:59.851 "psk": "key0", 00:18:59.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.851 "hdgst": false, 00:18:59.851 "ddgst": false 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_nvme_set_hotplug", 00:18:59.851 "params": { 00:18:59.851 "period_us": 100000, 00:18:59.851 "enable": false 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_enable_histogram", 00:18:59.851 "params": { 00:18:59.851 "name": "nvme0n1", 00:18:59.851 "enable": 
true 00:18:59.851 } 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "method": "bdev_wait_for_examine" 00:18:59.851 } 00:18:59.851 ] 00:18:59.851 }, 00:18:59.851 { 00:18:59.851 "subsystem": "nbd", 00:18:59.851 "config": [] 00:18:59.851 } 00:18:59.851 ] 00:18:59.851 }' 00:18:59.851 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:59.851 11:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.851 [2024-05-15 11:10:56.934238] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:18:59.851 [2024-05-15 11:10:56.934284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286355 ] 00:18:59.851 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.851 [2024-05-15 11:10:56.988475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.851 [2024-05-15 11:10:57.062451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.109 [2024-05-15 11:10:57.206086] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.675 11:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:00.675 11:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:00.675 11:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.675 11:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:00.675 11:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.675 11:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.932 Running 
I/O for 1 seconds... 00:19:01.865 00:19:01.865 Latency(us) 00:19:01.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.865 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.865 Verification LBA range: start 0x0 length 0x2000 00:19:01.865 nvme0n1 : 1.01 5406.22 21.12 0.00 0.00 23503.08 6097.70 42398.94 00:19:01.865 =================================================================================================================== 00:19:01.865 Total : 5406.22 21.12 0.00 0.00 23503.08 6097.70 42398.94 00:19:01.865 0 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:01.865 nvmf_trace.0 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2286355 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # 
'[' -z 2286355 ']' 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2286355 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:01.865 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2286355 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2286355' 00:19:02.123 killing process with pid 2286355 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2286355 00:19:02.123 Received shutdown signal, test time was about 1.000000 seconds 00:19:02.123 00:19:02.123 Latency(us) 00:19:02.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.123 =================================================================================================================== 00:19:02.123 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2286355 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:02.123 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:02.123 rmmod nvme_tcp 00:19:02.123 rmmod nvme_fabrics 00:19:02.123 rmmod nvme_keyring 
00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2286320 ']' 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2286320 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2286320 ']' 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2286320 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2286320 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2286320' 00:19:02.381 killing process with pid 2286320 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2286320 00:19:02.381 [2024-05-15 11:10:59.455615] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:02.381 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2286320 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.670 11:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.620 11:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.620 11:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TGwGvNYWEa /tmp/tmp.o5GNrt5iDo /tmp/tmp.1HZT47fWdD 00:19:04.620 00:19:04.620 real 1m23.182s 00:19:04.620 user 2m7.157s 00:19:04.620 sys 0m28.867s 00:19:04.620 11:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:04.620 11:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.620 ************************************ 00:19:04.620 END TEST nvmf_tls 00:19:04.620 ************************************ 00:19:04.620 11:11:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:04.620 11:11:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:04.620 11:11:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:04.620 11:11:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.620 ************************************ 00:19:04.620 START TEST nvmf_fips 00:19:04.620 ************************************ 00:19:04.620 11:11:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:04.878 * Looking for test storage... 
00:19:04.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.878 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:04.879 11:11:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:19:04.879 Error setting digest 00:19:04.879 0002ADE34E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:04.879 0002ADE34E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:19:04.879 11:11:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:10.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:10.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:10.142 Found net devices under 0000:86:00.0: cvl_0_0 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.142 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:10.143 Found net devices under 0000:86:00.1: cvl_0_1 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:10.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:19:10.143 00:19:10.143 --- 10.0.0.2 ping statistics --- 00:19:10.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.143 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:10.143 00:19:10.143 --- 10.0.0.1 ping statistics --- 00:19:10.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.143 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.143 11:11:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2290283 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2290283 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2290283 ']' 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.143 11:11:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.143 [2024-05-15 11:11:07.064461] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:19:10.143 [2024-05-15 11:11:07.064515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.143 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.143 [2024-05-15 11:11:07.120446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.143 [2024-05-15 11:11:07.196788] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.143 [2024-05-15 11:11:07.196823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.143 [2024-05-15 11:11:07.196829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.143 [2024-05-15 11:11:07.196835] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.143 [2024-05-15 11:11:07.196840] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:10.143 [2024-05-15 11:11:07.196864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:10.709 11:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.967 [2024-05-15 11:11:08.028218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.967 [2024-05-15 11:11:08.044190] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:10.967 [2024-05-15 11:11:08.044227] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.967 [2024-05-15 11:11:08.044380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.967 [2024-05-15 11:11:08.072398] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:10.967 malloc0 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2290395 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2290395 /var/tmp/bdevperf.sock 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2290395 ']' 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:10.967 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.968 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:10.968 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.968 [2024-05-15 11:11:08.148712] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:19:10.968 [2024-05-15 11:11:08.148758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290395 ] 00:19:10.968 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.968 [2024-05-15 11:11:08.199193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.225 [2024-05-15 11:11:08.271186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.790 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:11.790 11:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:19:11.790 11:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:12.048 [2024-05-15 11:11:09.096945] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.048 [2024-05-15 11:11:09.097024] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:12.048 TLSTESTn1 00:19:12.048 11:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.048 Running I/O for 10 seconds... 
00:19:24.245 00:19:24.245 Latency(us) 00:19:24.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.245 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.245 Verification LBA range: start 0x0 length 0x2000 00:19:24.245 TLSTESTn1 : 10.01 5453.11 21.30 0.00 0.00 23436.33 5100.41 23365.01 00:19:24.246 =================================================================================================================== 00:19:24.246 Total : 5453.11 21.30 0.00 0.00 23436.33 5100.41 23365.01 00:19:24.246 0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:24.246 nvmf_trace.0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2290395 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2290395 ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill 
-0 2290395 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2290395 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2290395' 00:19:24.246 killing process with pid 2290395 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2290395 00:19:24.246 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.246 00:19:24.246 Latency(us) 00:19:24.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.246 =================================================================================================================== 00:19:24.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.246 [2024-05-15 11:11:19.434153] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2290395 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:19:24.246 rmmod nvme_tcp 00:19:24.246 rmmod nvme_fabrics 00:19:24.246 rmmod nvme_keyring 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2290283 ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2290283 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2290283 ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2290283 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2290283 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2290283' 00:19:24.246 killing process with pid 2290283 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2290283 00:19:24.246 [2024-05-15 11:11:19.745536] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:24.246 [2024-05-15 11:11:19.745570] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2290283 00:19:24.246 11:11:19 
nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.246 11:11:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.813 11:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.813 11:11:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:24.813 00:19:24.813 real 0m20.215s 00:19:24.813 user 0m22.476s 00:19:24.813 sys 0m8.474s 00:19:24.813 11:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:24.813 11:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.813 ************************************ 00:19:24.813 END TEST nvmf_fips 00:19:24.813 ************************************ 00:19:24.813 11:11:22 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:24.813 11:11:22 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:24.813 11:11:22 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:24.813 11:11:22 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:24.813 11:11:22 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.813 11:11:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.075 11:11:27 nvmf_tcp -- 
nvmf/common.sh@291 -- # pci_devs=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
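The `gather_supported_nvmf_pci_devs` trace above buckets NICs into `e810`, `x722`, and `mlx` arrays by PCI vendor:device ID (`0x8086:0x159b` matching the E810/`ice` entries, as the subsequent `Found 0000:86:00.0 (0x8086 - 0x159b)` lines confirm). A minimal sketch of that classification, using only the IDs visible in this log (the script's full table is longer):

```shell
# Sketch of the vendor:device bucketing done by nvmf/common.sh.
# ID table is illustrative, taken from the IDs that appear in this log.
classify_nic() {
  # $1 is "vendor:device", e.g. "0x8086:0x159b"
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;  # Intel E810 family (ice driver)
    0x8086:0x37d2)               echo x722 ;;  # Intel X722
    0x15b3:*)                    echo mlx  ;;  # Mellanox families
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086:0x159b   # e810, matching "Found 0000:86:00.0 (0x8086 - 0x159b)"
```

The real script then globs `/sys/bus/pci/devices/$pci/net/` for each matched device to recover the kernel net-device names (`cvl_0_0`, `cvl_0_1` above).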
00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.075 11:11:27 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:30.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:30.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:30.076 Found net devices under 0000:86:00.0: cvl_0_0 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:30.076 Found net devices under 0000:86:00.1: cvl_0_1 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.076 
11:11:27 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:30.076 11:11:27 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:30.076 11:11:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:30.076 11:11:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:30.076 11:11:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:30.076 ************************************ 00:19:30.076 START TEST nvmf_perf_adq 00:19:30.076 ************************************ 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:30.076 * Looking for test storage... 00:19:30.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
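The `common.sh@17` line above runs `nvme gen-hostnqn` to mint the host NQN (`nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...`). That command emits the standard UUID-based NQN format; a minimal stand-in for environments without nvme-cli might look like the following (the `/proc/sys/kernel/random/uuid` source is a Linux-specific assumption, with `uuidgen` as fallback):

```shell
# Hedged stand-in for `nvme gen-hostnqn`: a UUID-based NQN per the
# nqn.2014-08.org.nvmexpress:uuid: convention seen in the log above.
gen_hostnqn() {
  uuid=""
  if [ -r /proc/sys/kernel/random/uuid ]; then
    uuid=$(cat /proc/sys/kernel/random/uuid)   # Linux kernel-provided UUID
  else
    uuid=$(uuidgen)                            # fallback if available
  fi
  printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$uuid"
}

gen_hostnqn
```

The test harness feeds this NQN (and the derived `NVME_HOSTID`) to `nvme connect` via the `NVME_HOST` array shown in the trace.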
00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:30.076 11:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:35.345 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:35.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:35.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:35.346 Found net devices under 0000:86:00.0: cvl_0_0 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:35.346 Found net devices under 0000:86:00.1: cvl_0_1 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:35.346 11:11:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:36.280 11:11:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:38.183 11:11:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.457 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
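The `adq_reload_driver` step traced above is simply `rmmod ice; modprobe ice; sleep 5` before `nvmftestinit` re-discovers the devices. A small sketch of that reload, with a hypothetical `wait_for_netdev` helper (not in the script, which uses the fixed sleep) that polls for the net device to reappear:

```shell
# Sketch of the driver-reload step; reload_driver mirrors the rmmod/modprobe
# pair in the log, wait_for_netdev is an illustrative bounded poll.
reload_driver() {
  mod=$1
  rmmod "$mod" 2>/dev/null || true   # tolerate "module not loaded"
  modprobe "$mod"
}

wait_for_netdev() {
  # Poll until $base/$dev exists, up to $tries seconds.
  dev=$1; tries=${2:-10}; base=${3:-/sys/class/net}
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ -e "$base/$dev" ] && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}
```

With a helper like this, the fixed `sleep 5` could become `wait_for_netdev cvl_0_0`, failing fast if the reload leaves the interface missing.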
00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:43.458 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:19:43.458 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:43.458 Found net devices under 0000:86:00.0: cvl_0_0 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:43.458 Found net devices under 0000:86:00.1: cvl_0_1 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:43.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:19:43.458 00:19:43.458 --- 10.0.0.2 ping statistics --- 00:19:43.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.458 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:19:43.458 00:19:43.458 --- 10.0.0.1 ping statistics --- 00:19:43.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.458 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.458 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2300214 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2300214 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 
-- # '[' -z 2300214 ']' 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:43.459 11:11:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.717 [2024-05-15 11:11:40.756890] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:19:43.717 [2024-05-15 11:11:40.756933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.717 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.717 [2024-05-15 11:11:40.812952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.717 [2024-05-15 11:11:40.894797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.717 [2024-05-15 11:11:40.894833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.717 [2024-05-15 11:11:40.894840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.717 [2024-05-15 11:11:40.894847] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.717 [2024-05-15 11:11:40.894852] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:43.717 [2024-05-15 11:11:40.894887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.717 [2024-05-15 11:11:40.895002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.717 [2024-05-15 11:11:40.895087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.717 [2024-05-15 11:11:40.895089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.328 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:44.328 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:19:44.328 11:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.328 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:44.328 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 [2024-05-15 11:11:41.743704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 Malloc1 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 
11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.587 [2024-05-15 11:11:41.791376] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:44.587 [2024-05-15 11:11:41.791618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2300339 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:44.587 11:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:44.587 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.111 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:47.111 11:11:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.111 11:11:43 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.111 11:11:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.111 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:47.111 "tick_rate": 2300000000, 00:19:47.111 "poll_groups": [ 00:19:47.111 { 00:19:47.111 "name": "nvmf_tgt_poll_group_000", 00:19:47.111 "admin_qpairs": 1, 00:19:47.111 "io_qpairs": 1, 00:19:47.111 "current_admin_qpairs": 1, 00:19:47.111 "current_io_qpairs": 1, 00:19:47.111 "pending_bdev_io": 0, 00:19:47.111 "completed_nvme_io": 19156, 00:19:47.111 "transports": [ 00:19:47.111 { 00:19:47.111 "trtype": "TCP" 00:19:47.111 } 00:19:47.111 ] 00:19:47.111 }, 00:19:47.111 { 00:19:47.111 "name": "nvmf_tgt_poll_group_001", 00:19:47.111 "admin_qpairs": 0, 00:19:47.111 "io_qpairs": 1, 00:19:47.112 "current_admin_qpairs": 0, 00:19:47.112 "current_io_qpairs": 1, 00:19:47.112 "pending_bdev_io": 0, 00:19:47.112 "completed_nvme_io": 19669, 00:19:47.112 "transports": [ 00:19:47.112 { 00:19:47.112 "trtype": "TCP" 00:19:47.112 } 00:19:47.112 ] 00:19:47.112 }, 00:19:47.112 { 00:19:47.112 "name": "nvmf_tgt_poll_group_002", 00:19:47.112 "admin_qpairs": 0, 00:19:47.112 "io_qpairs": 1, 00:19:47.112 "current_admin_qpairs": 0, 00:19:47.112 "current_io_qpairs": 1, 00:19:47.112 "pending_bdev_io": 0, 00:19:47.112 "completed_nvme_io": 19431, 00:19:47.112 "transports": [ 00:19:47.112 { 00:19:47.112 "trtype": "TCP" 00:19:47.112 } 00:19:47.112 ] 00:19:47.112 }, 00:19:47.112 { 00:19:47.112 "name": "nvmf_tgt_poll_group_003", 00:19:47.112 "admin_qpairs": 0, 00:19:47.112 "io_qpairs": 1, 00:19:47.112 "current_admin_qpairs": 0, 00:19:47.112 "current_io_qpairs": 1, 00:19:47.112 "pending_bdev_io": 0, 00:19:47.112 "completed_nvme_io": 19104, 00:19:47.112 "transports": [ 00:19:47.112 { 00:19:47.112 "trtype": "TCP" 00:19:47.112 } 00:19:47.112 ] 00:19:47.112 } 00:19:47.112 ] 00:19:47.112 }' 00:19:47.112 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r 
'.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:47.112 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:47.112 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:47.112 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:47.112 11:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2300339 00:19:55.238 Initializing NVMe Controllers 00:19:55.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:55.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:55.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:55.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:55.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:55.238 Initialization complete. Launching workers. 00:19:55.238 ======================================================== 00:19:55.238 Latency(us) 00:19:55.238 Device Information : IOPS MiB/s Average min max 00:19:55.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10039.96 39.22 6376.11 2258.54 10223.24 00:19:55.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10356.15 40.45 6181.47 2909.07 10387.86 00:19:55.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10302.45 40.24 6211.50 2495.47 10361.30 00:19:55.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10202.05 39.85 6273.46 2139.28 10580.64 00:19:55.238 ======================================================== 00:19:55.238 Total : 40900.62 159.77 6259.76 2139.28 10580.64 00:19:55.238 00:19:55.238 11:11:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:55.238 11:11:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:55.238 11:11:51 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:55.238 11:11:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.238 11:11:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:55.238 11:11:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.238 11:11:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.238 rmmod nvme_tcp 00:19:55.238 rmmod nvme_fabrics 00:19:55.238 rmmod nvme_keyring 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2300214 ']' 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2300214 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2300214 ']' 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2300214 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2300214 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2300214' 00:19:55.238 killing process with pid 2300214 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2300214 00:19:55.238 [2024-05-15 11:11:52.061028] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2300214 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.238 11:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.144 11:11:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:57.144 11:11:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:57.144 11:11:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:58.523 11:11:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:00.426 11:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@298 -- # mlx=() 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:05.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:05.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:05.692 Found net devices under 0000:86:00.0: cvl_0_0 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.692 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:05.693 Found net devices under 0000:86:00.1: cvl_0_1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:05.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:20:05.693 00:20:05.693 --- 10.0.0.2 ping statistics --- 00:20:05.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.693 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:05.693 00:20:05.693 --- 10.0.0.1 ping statistics --- 00:20:05.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.693 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- 
target/perf_adq.sh@90 -- # adq_configure_driver 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:05.693 net.core.busy_poll = 1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:05.693 net.core.busy_read = 1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:05.693 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # 
nvmfpid=2304248 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2304248 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2304248 ']' 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:05.951 11:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.951 [2024-05-15 11:12:03.010993] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:20:05.951 [2024-05-15 11:12:03.011044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.951 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.951 [2024-05-15 11:12:03.070471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.951 [2024-05-15 11:12:03.152524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.951 [2024-05-15 11:12:03.152561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:05.951 [2024-05-15 11:12:03.152568] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.951 [2024-05-15 11:12:03.152573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.951 [2024-05-15 11:12:03.152579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.951 [2024-05-15 11:12:03.152619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.951 [2024-05-15 11:12:03.152713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.951 [2024-05-15 11:12:03.152799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.951 [2024-05-15 11:12:03.152800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 [2024-05-15 11:12:04.004800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 Malloc1 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- 
target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 [2024-05-15 11:12:04.048157] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:06.882 [2024-05-15 11:12:04.048407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.882 11:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2304499 00:20:06.883 11:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:06.883 11:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:06.883 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:09.405 "tick_rate": 2300000000, 00:20:09.405 "poll_groups": [ 00:20:09.405 { 00:20:09.405 "name": "nvmf_tgt_poll_group_000", 00:20:09.405 "admin_qpairs": 1, 00:20:09.405 "io_qpairs": 4, 00:20:09.405 "current_admin_qpairs": 1, 00:20:09.405 "current_io_qpairs": 4, 00:20:09.405 "pending_bdev_io": 0, 00:20:09.405 "completed_nvme_io": 44723, 00:20:09.405 "transports": [ 00:20:09.405 { 00:20:09.405 "trtype": "TCP" 00:20:09.405 } 00:20:09.405 ] 00:20:09.405 }, 00:20:09.405 { 00:20:09.405 "name": "nvmf_tgt_poll_group_001", 00:20:09.405 "admin_qpairs": 0, 00:20:09.405 "io_qpairs": 0, 00:20:09.405 "current_admin_qpairs": 0, 00:20:09.405 "current_io_qpairs": 0, 00:20:09.405 "pending_bdev_io": 0, 00:20:09.405 "completed_nvme_io": 0, 00:20:09.405 "transports": [ 00:20:09.405 { 00:20:09.405 "trtype": "TCP" 00:20:09.405 } 00:20:09.405 ] 00:20:09.405 }, 00:20:09.405 { 00:20:09.405 "name": "nvmf_tgt_poll_group_002", 00:20:09.405 "admin_qpairs": 0, 00:20:09.405 "io_qpairs": 0, 00:20:09.405 "current_admin_qpairs": 0, 00:20:09.405 "current_io_qpairs": 0, 00:20:09.405 "pending_bdev_io": 0, 00:20:09.405 "completed_nvme_io": 0, 00:20:09.405 "transports": [ 00:20:09.405 { 00:20:09.405 "trtype": "TCP" 00:20:09.405 } 00:20:09.405 ] 00:20:09.405 }, 00:20:09.405 { 00:20:09.405 "name": "nvmf_tgt_poll_group_003", 00:20:09.405 "admin_qpairs": 0, 00:20:09.405 "io_qpairs": 0, 
00:20:09.405 "current_admin_qpairs": 0, 00:20:09.405 "current_io_qpairs": 0, 00:20:09.405 "pending_bdev_io": 0, 00:20:09.405 "completed_nvme_io": 0, 00:20:09.405 "transports": [ 00:20:09.405 { 00:20:09.405 "trtype": "TCP" 00:20:09.405 } 00:20:09.405 ] 00:20:09.405 } 00:20:09.405 ] 00:20:09.405 }' 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:20:09.405 11:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2304499 00:20:17.502 Initializing NVMe Controllers 00:20:17.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:17.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:17.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:17.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:17.502 Initialization complete. Launching workers. 
00:20:17.502 ======================================================== 00:20:17.502 Latency(us) 00:20:17.502 Device Information : IOPS MiB/s Average min max 00:20:17.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6155.00 24.04 10399.30 1297.35 57211.83 00:20:17.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5196.20 20.30 12322.33 1534.58 56766.02 00:20:17.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6135.20 23.97 10464.38 1194.90 57029.08 00:20:17.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5661.10 22.11 11345.31 1517.47 56940.69 00:20:17.502 ======================================================== 00:20:17.502 Total : 23147.50 90.42 11079.60 1194.90 57211.83 00:20:17.502 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.502 rmmod nvme_tcp 00:20:17.502 rmmod nvme_fabrics 00:20:17.502 rmmod nvme_keyring 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2304248 ']' 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2304248 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@947 -- # '[' -z 2304248 ']' 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2304248 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2304248 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2304248' 00:20:17.502 killing process with pid 2304248 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2304248 00:20:17.502 [2024-05-15 11:12:14.306134] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2304248 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.502 11:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.502 11:12:14 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.817 11:12:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:20.817 11:12:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:20.817 00:20:20.817 real 0m50.453s 00:20:20.817 user 2m49.544s 00:20:20.817 sys 0m9.026s 00:20:20.817 11:12:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:20.817 11:12:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.817 ************************************ 00:20:20.817 END TEST nvmf_perf_adq 00:20:20.817 ************************************ 00:20:20.817 11:12:17 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:20.817 11:12:17 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:20.817 11:12:17 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:20.817 11:12:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.817 ************************************ 00:20:20.817 START TEST nvmf_shutdown 00:20:20.817 ************************************ 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:20.817 * Looking for test storage... 
00:20:20.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.817 11:12:17 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.817 11:12:17 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:20.817 ************************************ 00:20:20.817 START TEST nvmf_shutdown_tc1 00:20:20.817 ************************************ 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.817 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.818 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.818 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.818 11:12:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.818 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:20.818 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:20.818 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:20.818 11:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.091 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.091 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.092 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:26.092 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:26.092 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.092 11:12:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.092 Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.092 
11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:26.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:20:26.092 00:20:26.092 --- 10.0.0.2 ping statistics --- 00:20:26.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.092 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:20:26.092 00:20:26.092 --- 10.0.0.1 ping statistics --- 00:20:26.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.092 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2310299 00:20:26.092 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2310299 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2310299 ']' 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.093 11:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.093 [2024-05-15 11:12:23.025206] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:20:26.093 [2024-05-15 11:12:23.025249] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.093 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.093 [2024-05-15 11:12:23.081581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.093 [2024-05-15 11:12:23.162682] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.093 [2024-05-15 11:12:23.162716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.093 [2024-05-15 11:12:23.162723] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.093 [2024-05-15 11:12:23.162730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.093 [2024-05-15 11:12:23.162735] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.093 [2024-05-15 11:12:23.162772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.093 [2024-05-15 11:12:23.162863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.093 [2024-05-15 11:12:23.162971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.093 [2024-05-15 11:12:23.162972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.662 [2024-05-15 11:12:23.869952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:26.662 
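The `waitforlisten` call above blocks until the target is accepting RPCs on /var/tmp/spdk.sock. A hedged sketch of that retry loop (the 100-retry default matches the `max_retries=100` trace; the 0.1 s interval and the bare path-existence check are simplifications, since the real helper also verifies the PID is still alive):

```shell
#!/usr/bin/env bash
# Poll until a path appears, giving up after max_retries attempts.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            return 1
        fi
        sleep 0.1
    done
}

tmp=$(mktemp)                                  # stands in for /var/tmp/spdk.sock
wait_for_path "$tmp" 5 && echo "listening"     # -> listening
wait_for_path /nonexistent/spdk.sock 3 || echo "timed out"   # -> timed out
```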
11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.662 11:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.921 Malloc1 00:20:26.921 [2024-05-15 11:12:23.965544] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:26.921 [2024-05-15 11:12:23.965779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.921 Malloc2 00:20:26.921 Malloc3 00:20:26.921 Malloc4 00:20:26.921 Malloc5 00:20:26.921 Malloc6 00:20:27.179 Malloc7 00:20:27.179 Malloc8 00:20:27.179 Malloc9 00:20:27.179 Malloc10 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set 
+x 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2310582 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2310582 /var/tmp/bdevperf.sock 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2310582 ']' 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.179 { 00:20:27.179 "params": { 00:20:27.179 "name": "Nvme$subsystem", 00:20:27.179 "trtype": "$TEST_TRANSPORT", 00:20:27.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.179 "adrfam": "ipv4", 00:20:27.179 "trsvcid": "$NVMF_PORT", 00:20:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.179 "hdgst": ${hdgst:-false}, 00:20:27.179 "ddgst": ${ddgst:-false} 00:20:27.179 }, 00:20:27.179 "method": "bdev_nvme_attach_controller" 00:20:27.179 } 00:20:27.179 EOF 00:20:27.179 )") 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.179 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.179 { 00:20:27.179 "params": { 00:20:27.179 "name": "Nvme$subsystem", 00:20:27.179 "trtype": "$TEST_TRANSPORT", 00:20:27.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.179 "adrfam": "ipv4", 00:20:27.179 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 
}, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.180 { 00:20:27.180 "params": { 00:20:27.180 "name": "Nvme$subsystem", 00:20:27.180 "trtype": "$TEST_TRANSPORT", 00:20:27.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.180 "adrfam": "ipv4", 00:20:27.180 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 }, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.180 { 00:20:27.180 "params": { 00:20:27.180 "name": "Nvme$subsystem", 00:20:27.180 "trtype": "$TEST_TRANSPORT", 00:20:27.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.180 "adrfam": "ipv4", 00:20:27.180 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 }, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.180 11:12:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.180 { 00:20:27.180 "params": { 00:20:27.180 "name": "Nvme$subsystem", 00:20:27.180 "trtype": "$TEST_TRANSPORT", 00:20:27.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.180 "adrfam": "ipv4", 00:20:27.180 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 }, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.180 { 00:20:27.180 "params": { 00:20:27.180 "name": "Nvme$subsystem", 00:20:27.180 "trtype": "$TEST_TRANSPORT", 00:20:27.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.180 "adrfam": "ipv4", 00:20:27.180 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 }, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.180 [2024-05-15 11:12:24.435288] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:20:27.180 [2024-05-15 11:12:24.435338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.180 { 00:20:27.180 "params": { 00:20:27.180 "name": "Nvme$subsystem", 00:20:27.180 "trtype": "$TEST_TRANSPORT", 00:20:27.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.180 "adrfam": "ipv4", 00:20:27.180 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 }, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.180 { 00:20:27.180 "params": { 00:20:27.180 "name": "Nvme$subsystem", 00:20:27.180 "trtype": "$TEST_TRANSPORT", 00:20:27.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.180 "adrfam": "ipv4", 00:20:27.180 "trsvcid": "$NVMF_PORT", 00:20:27.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.180 "hdgst": ${hdgst:-false}, 00:20:27.180 "ddgst": ${ddgst:-false} 00:20:27.180 }, 00:20:27.180 "method": "bdev_nvme_attach_controller" 00:20:27.180 } 00:20:27.180 EOF 00:20:27.180 )") 00:20:27.180 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.438 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:20:27.438 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.438 { 00:20:27.438 "params": { 00:20:27.438 "name": "Nvme$subsystem", 00:20:27.438 "trtype": "$TEST_TRANSPORT", 00:20:27.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.438 "adrfam": "ipv4", 00:20:27.438 "trsvcid": "$NVMF_PORT", 00:20:27.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.438 "hdgst": ${hdgst:-false}, 00:20:27.438 "ddgst": ${ddgst:-false} 00:20:27.438 }, 00:20:27.438 "method": "bdev_nvme_attach_controller" 00:20:27.438 } 00:20:27.438 EOF 00:20:27.438 )") 00:20:27.438 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.438 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.438 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.438 { 00:20:27.438 "params": { 00:20:27.438 "name": "Nvme$subsystem", 00:20:27.438 "trtype": "$TEST_TRANSPORT", 00:20:27.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.438 "adrfam": "ipv4", 00:20:27.438 "trsvcid": "$NVMF_PORT", 00:20:27.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.438 "hdgst": ${hdgst:-false}, 00:20:27.438 "ddgst": ${ddgst:-false} 00:20:27.438 }, 00:20:27.438 "method": "bdev_nvme_attach_controller" 00:20:27.438 } 00:20:27.438 EOF 00:20:27.438 )") 00:20:27.439 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:27.439 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.439 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
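The repeated `config+=("$(cat <<-EOF ...)")` / `cat` pairs above are nvmf/common.sh building one `bdev_nvme_attach_controller` fragment per subsystem, then comma-joining them with `IFS=,` before the final `printf` (common.sh@557-558). A condensed bash sketch of that assembly, with the values hard-coded from this run (the real helper expands `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` from its environment):

```shell
#!/usr/bin/env bash
# Build one JSON params fragment per subsystem id, then comma-join them.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=()
for subsystem in 1 2 3; do
    config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
        "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem")")
done
joined=$(IFS=,; printf '%s' "${config[*]}")    # same IFS=, join as common.sh
printf '%s\n' "$joined"
```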
00:20:27.439 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:27.439 11:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme1", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme2", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme3", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme4", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme5", 00:20:27.439 
"trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme6", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme7", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme8", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme9", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": 
false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 },{ 00:20:27.439 "params": { 00:20:27.439 "name": "Nvme10", 00:20:27.439 "trtype": "tcp", 00:20:27.439 "traddr": "10.0.0.2", 00:20:27.439 "adrfam": "ipv4", 00:20:27.439 "trsvcid": "4420", 00:20:27.439 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:27.439 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:27.439 "hdgst": false, 00:20:27.439 "ddgst": false 00:20:27.439 }, 00:20:27.439 "method": "bdev_nvme_attach_controller" 00:20:27.439 }' 00:20:27.439 [2024-05-15 11:12:24.491244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.439 [2024-05-15 11:12:24.564751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2310582 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:28.812 11:12:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:29.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2310582 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:20:29.746 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2310299 00:20:29.746 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:29.746 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:29.746 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:29.746 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.747 { 00:20:29.747 "params": { 00:20:29.747 "name": "Nvme$subsystem", 00:20:29.747 "trtype": "$TEST_TRANSPORT", 00:20:29.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.747 "adrfam": "ipv4", 00:20:29.747 "trsvcid": "$NVMF_PORT", 00:20:29.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.747 "hdgst": ${hdgst:-false}, 00:20:29.747 "ddgst": ${ddgst:-false} 00:20:29.747 }, 00:20:29.747 "method": "bdev_nvme_attach_controller" 00:20:29.747 } 00:20:29.747 EOF 00:20:29.747 )") 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.747 { 00:20:29.747 "params": { 00:20:29.747 "name": "Nvme$subsystem", 00:20:29.747 "trtype": "$TEST_TRANSPORT", 00:20:29.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.747 "adrfam": 
"ipv4", 00:20:29.747 "trsvcid": "$NVMF_PORT", 00:20:29.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.747 "hdgst": ${hdgst:-false}, 00:20:29.747 "ddgst": ${ddgst:-false} 00:20:29.747 }, 00:20:29.747 "method": "bdev_nvme_attach_controller" 00:20:29.747 } 00:20:29.747 EOF 00:20:29.747 )") 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.747 { 00:20:29.747 "params": { 00:20:29.747 "name": "Nvme$subsystem", 00:20:29.747 "trtype": "$TEST_TRANSPORT", 00:20:29.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.747 "adrfam": "ipv4", 00:20:29.747 "trsvcid": "$NVMF_PORT", 00:20:29.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.747 "hdgst": ${hdgst:-false}, 00:20:29.747 "ddgst": ${ddgst:-false} 00:20:29.747 }, 00:20:29.747 "method": "bdev_nvme_attach_controller" 00:20:29.747 } 00:20:29.747 EOF 00:20:29.747 )") 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.747 { 00:20:29.747 "params": { 00:20:29.747 "name": "Nvme$subsystem", 00:20:29.747 "trtype": "$TEST_TRANSPORT", 00:20:29.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.747 "adrfam": "ipv4", 00:20:29.747 "trsvcid": "$NVMF_PORT", 00:20:29.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.747 "hdgst": ${hdgst:-false}, 00:20:29.747 "ddgst": 
${ddgst:-false} 00:20:29.747 }, 00:20:29.747 "method": "bdev_nvme_attach_controller" 00:20:29.747 } 00:20:29.747 EOF 00:20:29.747 )") 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.747 11:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.747 { 00:20:29.747 "params": { 00:20:29.747 "name": "Nvme$subsystem", 00:20:29.747 "trtype": "$TEST_TRANSPORT", 00:20:29.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.747 "adrfam": "ipv4", 00:20:29.747 "trsvcid": "$NVMF_PORT", 00:20:29.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.747 "hdgst": ${hdgst:-false}, 00:20:29.747 "ddgst": ${ddgst:-false} 00:20:29.747 }, 00:20:29.747 "method": "bdev_nvme_attach_controller" 00:20:29.747 } 00:20:29.747 EOF 00:20:29.747 )") 00:20:29.747 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:29.747 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.747 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.747 { 00:20:29.747 "params": { 00:20:29.747 "name": "Nvme$subsystem", 00:20:29.747 "trtype": "$TEST_TRANSPORT", 00:20:29.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.747 "adrfam": "ipv4", 00:20:29.747 "trsvcid": "$NVMF_PORT", 00:20:29.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.747 "hdgst": ${hdgst:-false}, 00:20:29.747 "ddgst": ${ddgst:-false} 00:20:29.747 }, 00:20:29.747 "method": "bdev_nvme_attach_controller" 00:20:29.747 } 00:20:29.747 EOF 00:20:29.747 )") 00:20:29.747 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:20:29.747 [2024-05-15 11:12:27.010030] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:20:29.747 [2024-05-15 11:12:27.010080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310929 ] 00:20:30.005 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:30.006 { 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme$subsystem", 00:20:30.006 "trtype": "$TEST_TRANSPORT", 00:20:30.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "$NVMF_PORT", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.006 "hdgst": ${hdgst:-false}, 00:20:30.006 "ddgst": ${ddgst:-false} 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 } 00:20:30.006 EOF 00:20:30.006 )") 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:30.006 { 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme$subsystem", 00:20:30.006 "trtype": "$TEST_TRANSPORT", 00:20:30.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "$NVMF_PORT", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.006 "hdgst": ${hdgst:-false}, 00:20:30.006 "ddgst": ${ddgst:-false} 00:20:30.006 }, 00:20:30.006 "method": 
"bdev_nvme_attach_controller" 00:20:30.006 } 00:20:30.006 EOF 00:20:30.006 )") 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:30.006 { 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme$subsystem", 00:20:30.006 "trtype": "$TEST_TRANSPORT", 00:20:30.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "$NVMF_PORT", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.006 "hdgst": ${hdgst:-false}, 00:20:30.006 "ddgst": ${ddgst:-false} 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 } 00:20:30.006 EOF 00:20:30.006 )") 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:30.006 { 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme$subsystem", 00:20:30.006 "trtype": "$TEST_TRANSPORT", 00:20:30.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "$NVMF_PORT", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.006 "hdgst": ${hdgst:-false}, 00:20:30.006 "ddgst": ${ddgst:-false} 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 } 00:20:30.006 EOF 00:20:30.006 )") 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:30.006 EAL: No free 2048 kB hugepages reported on node 1 
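The `gen_nvmf_target_json` loop traced above accumulates one JSON fragment per subsystem in a bash array via `config+=("$(cat <<-EOF ...)")`, then joins the fragments and validates the result with `jq`. A minimal standalone sketch of that heredoc-array pattern, with the parameter list trimmed down and `jq` assumed to be installed:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from nvmf/common.sh (simplified
# fields): build one JSON fragment per subsystem, then comma-join the array.
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": { "name": "Nvme$subsystem",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "${config[*]}" joins the array elements with the first character of IFS,
# so a local IFS=, produces the comma-separated object list seen in the log.
join_json() { local IFS=,; printf '[%s]\n' "${config[*]}"; }
join_json   # pipe through `jq .` to validate, as common.sh does
```

The actual helper emits the fragments with `printf '%s\n'` under `IFS=,` and feeds them to `bdevperf --json /dev/fd/62`; wrapping them in `[...]` here is just a convenience so the sketch prints one valid JSON document.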
00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:30.006 11:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme1", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme2", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme3", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme4", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": 
"bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme5", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme6", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme7", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.006 "method": "bdev_nvme_attach_controller" 00:20:30.006 },{ 00:20:30.006 "params": { 00:20:30.006 "name": "Nvme8", 00:20:30.006 "trtype": "tcp", 00:20:30.006 "traddr": "10.0.0.2", 00:20:30.006 "adrfam": "ipv4", 00:20:30.006 "trsvcid": "4420", 00:20:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:30.006 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:30.006 "hdgst": false, 00:20:30.006 "ddgst": false 00:20:30.006 }, 00:20:30.007 "method": "bdev_nvme_attach_controller" 00:20:30.007 },{ 00:20:30.007 "params": { 00:20:30.007 "name": "Nvme9", 00:20:30.007 "trtype": "tcp", 00:20:30.007 "traddr": "10.0.0.2", 00:20:30.007 "adrfam": "ipv4", 00:20:30.007 "trsvcid": "4420", 00:20:30.007 "subnqn": 
"nqn.2016-06.io.spdk:cnode9", 00:20:30.007 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:30.007 "hdgst": false, 00:20:30.007 "ddgst": false 00:20:30.007 }, 00:20:30.007 "method": "bdev_nvme_attach_controller" 00:20:30.007 },{ 00:20:30.007 "params": { 00:20:30.007 "name": "Nvme10", 00:20:30.007 "trtype": "tcp", 00:20:30.007 "traddr": "10.0.0.2", 00:20:30.007 "adrfam": "ipv4", 00:20:30.007 "trsvcid": "4420", 00:20:30.007 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:30.007 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:30.007 "hdgst": false, 00:20:30.007 "ddgst": false 00:20:30.007 }, 00:20:30.007 "method": "bdev_nvme_attach_controller" 00:20:30.007 }' 00:20:30.007 [2024-05-15 11:12:27.065900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.007 [2024-05-15 11:12:27.140338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.378 Running I/O for 1 seconds... 00:20:32.310 00:20:32.310 Latency(us) 00:20:32.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.310 Verification LBA range: start 0x0 length 0x400 00:20:32.310 Nvme1n1 : 1.14 281.35 17.58 0.00 0.00 225484.27 15158.76 218833.25 00:20:32.310 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.310 Verification LBA range: start 0x0 length 0x400 00:20:32.310 Nvme2n1 : 1.03 249.50 15.59 0.00 0.00 250142.94 25416.57 214274.23 00:20:32.310 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.310 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme3n1 : 1.12 295.54 18.47 0.00 0.00 201124.37 16298.52 209715.20 00:20:32.311 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme4n1 : 1.13 287.65 17.98 0.00 0.00 211078.51 8719.14 218833.25 00:20:32.311 Job: Nvme5n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme5n1 : 1.14 280.31 17.52 0.00 0.00 213571.67 16754.42 214274.23 00:20:32.311 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme6n1 : 1.15 282.48 17.66 0.00 0.00 208475.41 3533.25 212450.62 00:20:32.311 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme7n1 : 1.13 282.31 17.64 0.00 0.00 205657.31 15386.71 219745.06 00:20:32.311 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme8n1 : 1.15 278.82 17.43 0.00 0.00 205216.01 17210.32 216097.84 00:20:32.311 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme9n1 : 1.15 277.85 17.37 0.00 0.00 203060.27 18578.03 219745.06 00:20:32.311 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.311 Verification LBA range: start 0x0 length 0x400 00:20:32.311 Nvme10n1 : 1.15 277.09 17.32 0.00 0.00 200585.93 14019.01 237069.36 00:20:32.311 =================================================================================================================== 00:20:32.311 Total : 2792.89 174.56 0.00 0.00 211631.98 3533.25 237069.36 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.571 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.571 rmmod nvme_tcp 00:20:32.571 rmmod nvme_fabrics 00:20:32.571 rmmod nvme_keyring 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2310299 ']' 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2310299 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 2310299 ']' 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 2310299 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2310299 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2310299' 00:20:32.828 killing process with pid 2310299 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 2310299 00:20:32.828 [2024-05-15 11:12:29.892361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:32.828 11:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 2310299 00:20:33.085 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.085 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.085 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.085 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.085 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.086 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.086 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.086 11:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- 
# ip -4 addr flush cvl_0_1 00:20:35.616 00:20:35.616 real 0m14.541s 00:20:35.616 user 0m33.694s 00:20:35.616 sys 0m5.166s 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.616 ************************************ 00:20:35.616 END TEST nvmf_shutdown_tc1 00:20:35.616 ************************************ 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:35.616 ************************************ 00:20:35.616 START TEST nvmf_shutdown_tc2 00:20:35.616 ************************************ 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.616 11:12:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.616 11:12:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:35.616 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:35.617 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:35.617 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:35.617 Found net devices under 0000:86:00.0: cvl_0_0 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
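The device-discovery steps in this stretch of the trace glob `/sys/bus/pci/devices/$pci/net/*` into `pci_net_devs` and then strip the directory prefix with `pci_net_devs=("${pci_net_devs[@]##*/}")` to get bare interface names like `cvl_0_0`. A small sketch of that glob-plus-parameter-expansion idiom, run against a scratch directory rather than real sysfs (the interface names below are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the glob + prefix-strip idiom from nvmf/common.sh, exercised
# against a temporary directory instead of /sys/bus/pci/devices.
tmp="$(mktemp -d)"
mkdir -p "$tmp/net"
touch "$tmp/net/cvl_0_0" "$tmp/net/cvl_0_1"

pci_net_devs=("$tmp"/net/*)              # full paths, sorted by the glob
pci_net_devs=("${pci_net_devs[@]##*/}")  # ##*/ keeps only each basename

printf '%s\n' "${pci_net_devs[@]}"
rm -rf "$tmp"
```

`${var##*/}` removes the longest prefix matching `*/`, and applying it with `[@]` maps the expansion over every array element in one step, which is why the log shows a single `pci_net_devs=(...)` line doing the conversion.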
00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:35.617 Found net devices under 0000:86:00.1: cvl_0_1 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:35.617 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:20:35.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:35.617 00:20:35.617 --- 10.0.0.2 ping statistics --- 00:20:35.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.617 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:20:35.617 00:20:35.617 --- 10.0.0.1 ping statistics --- 00:20:35.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.617 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2312098 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2312098 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2312098 ']' 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:35.617 11:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.617 [2024-05-15 11:12:32.804870] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:20:35.617 [2024-05-15 11:12:32.804908] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.617 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.617 [2024-05-15 11:12:32.860977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.874 [2024-05-15 11:12:32.934209] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.874 [2024-05-15 11:12:32.934250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.874 [2024-05-15 11:12:32.934256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.874 [2024-05-15 11:12:32.934262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.874 [2024-05-15 11:12:32.934267] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.874 [2024-05-15 11:12:32.934390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.874 [2024-05-15 11:12:32.934483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.874 [2024-05-15 11:12:32.934589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.874 [2024-05-15 11:12:32.934590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.437 [2024-05-15 11:12:33.640052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:36.437 
11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.437 11:12:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.695 Malloc1 00:20:36.695 [2024-05-15 11:12:33.731585] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:36.695 [2024-05-15 11:12:33.731831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.695 Malloc2 00:20:36.695 Malloc3 00:20:36.695 Malloc4 00:20:36.695 Malloc5 00:20:36.695 Malloc6 00:20:36.695 Malloc7 00:20:36.953 Malloc8 00:20:36.953 Malloc9 00:20:36.953 Malloc10 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2312384 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2312384 /var/tmp/bdevperf.sock 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2312384 ']' 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.953 { 00:20:36.953 "params": { 00:20:36.953 "name": "Nvme$subsystem", 00:20:36.953 "trtype": "$TEST_TRANSPORT", 00:20:36.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.953 "adrfam": "ipv4", 00:20:36.953 "trsvcid": "$NVMF_PORT", 00:20:36.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.953 "hdgst": ${hdgst:-false}, 00:20:36.953 "ddgst": ${ddgst:-false} 00:20:36.953 }, 00:20:36.953 "method": "bdev_nvme_attach_controller" 00:20:36.953 } 00:20:36.953 EOF 00:20:36.953 )") 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.953 { 00:20:36.953 "params": { 00:20:36.953 "name": "Nvme$subsystem", 00:20:36.953 "trtype": "$TEST_TRANSPORT", 00:20:36.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.953 "adrfam": "ipv4", 00:20:36.953 "trsvcid": "$NVMF_PORT", 00:20:36.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.953 "hdgst": ${hdgst:-false}, 00:20:36.953 "ddgst": ${ddgst:-false} 00:20:36.953 
}, 00:20:36.953 "method": "bdev_nvme_attach_controller" 00:20:36.953 } 00:20:36.953 EOF 00:20:36.953 )") 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.953 { 00:20:36.953 "params": { 00:20:36.953 "name": "Nvme$subsystem", 00:20:36.953 "trtype": "$TEST_TRANSPORT", 00:20:36.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.953 "adrfam": "ipv4", 00:20:36.953 "trsvcid": "$NVMF_PORT", 00:20:36.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.953 "hdgst": ${hdgst:-false}, 00:20:36.953 "ddgst": ${ddgst:-false} 00:20:36.953 }, 00:20:36.953 "method": "bdev_nvme_attach_controller" 00:20:36.953 } 00:20:36.953 EOF 00:20:36.953 )") 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.953 { 00:20:36.953 "params": { 00:20:36.953 "name": "Nvme$subsystem", 00:20:36.953 "trtype": "$TEST_TRANSPORT", 00:20:36.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.953 "adrfam": "ipv4", 00:20:36.953 "trsvcid": "$NVMF_PORT", 00:20:36.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.953 "hdgst": ${hdgst:-false}, 00:20:36.953 "ddgst": ${ddgst:-false} 00:20:36.953 }, 00:20:36.953 "method": "bdev_nvme_attach_controller" 00:20:36.953 } 00:20:36.953 EOF 00:20:36.953 )") 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.953 11:12:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.953 { 00:20:36.953 "params": { 00:20:36.953 "name": "Nvme$subsystem", 00:20:36.953 "trtype": "$TEST_TRANSPORT", 00:20:36.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.953 "adrfam": "ipv4", 00:20:36.953 "trsvcid": "$NVMF_PORT", 00:20:36.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.953 "hdgst": ${hdgst:-false}, 00:20:36.953 "ddgst": ${ddgst:-false} 00:20:36.953 }, 00:20:36.953 "method": "bdev_nvme_attach_controller" 00:20:36.953 } 00:20:36.953 EOF 00:20:36.953 )") 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.953 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.953 { 00:20:36.953 "params": { 00:20:36.953 "name": "Nvme$subsystem", 00:20:36.953 "trtype": "$TEST_TRANSPORT", 00:20:36.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.953 "adrfam": "ipv4", 00:20:36.953 "trsvcid": "$NVMF_PORT", 00:20:36.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.954 "hdgst": ${hdgst:-false}, 00:20:36.954 "ddgst": ${ddgst:-false} 00:20:36.954 }, 00:20:36.954 "method": "bdev_nvme_attach_controller" 00:20:36.954 } 00:20:36.954 EOF 00:20:36.954 )") 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.954 { 00:20:36.954 
"params": { 00:20:36.954 "name": "Nvme$subsystem", 00:20:36.954 "trtype": "$TEST_TRANSPORT", 00:20:36.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.954 "adrfam": "ipv4", 00:20:36.954 "trsvcid": "$NVMF_PORT", 00:20:36.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.954 "hdgst": ${hdgst:-false}, 00:20:36.954 "ddgst": ${ddgst:-false} 00:20:36.954 }, 00:20:36.954 "method": "bdev_nvme_attach_controller" 00:20:36.954 } 00:20:36.954 EOF 00:20:36.954 )") 00:20:36.954 [2024-05-15 11:12:34.201030] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:20:36.954 [2024-05-15 11:12:34.201076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312384 ] 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.954 { 00:20:36.954 "params": { 00:20:36.954 "name": "Nvme$subsystem", 00:20:36.954 "trtype": "$TEST_TRANSPORT", 00:20:36.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.954 "adrfam": "ipv4", 00:20:36.954 "trsvcid": "$NVMF_PORT", 00:20:36.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.954 "hdgst": ${hdgst:-false}, 00:20:36.954 "ddgst": ${ddgst:-false} 00:20:36.954 }, 00:20:36.954 "method": "bdev_nvme_attach_controller" 00:20:36.954 } 00:20:36.954 EOF 00:20:36.954 )") 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.954 { 00:20:36.954 "params": { 00:20:36.954 "name": "Nvme$subsystem", 00:20:36.954 "trtype": "$TEST_TRANSPORT", 00:20:36.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.954 "adrfam": "ipv4", 00:20:36.954 "trsvcid": "$NVMF_PORT", 00:20:36.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.954 "hdgst": ${hdgst:-false}, 00:20:36.954 "ddgst": ${ddgst:-false} 00:20:36.954 }, 00:20:36.954 "method": "bdev_nvme_attach_controller" 00:20:36.954 } 00:20:36.954 EOF 00:20:36.954 )") 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:36.954 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.211 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.211 { 00:20:37.211 "params": { 00:20:37.211 "name": "Nvme$subsystem", 00:20:37.211 "trtype": "$TEST_TRANSPORT", 00:20:37.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.211 "adrfam": "ipv4", 00:20:37.211 "trsvcid": "$NVMF_PORT", 00:20:37.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.211 "hdgst": ${hdgst:-false}, 00:20:37.211 "ddgst": ${ddgst:-false} 00:20:37.211 }, 00:20:37.211 "method": "bdev_nvme_attach_controller" 00:20:37.211 } 00:20:37.211 EOF 00:20:37.211 )") 00:20:37.211 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:37.211 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.211 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:37.211 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:37.212 11:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme1", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme2", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme3", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme4", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme5", 00:20:37.212 
"trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme6", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme7", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme8", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme9", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": 
false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 },{ 00:20:37.212 "params": { 00:20:37.212 "name": "Nvme10", 00:20:37.212 "trtype": "tcp", 00:20:37.212 "traddr": "10.0.0.2", 00:20:37.212 "adrfam": "ipv4", 00:20:37.212 "trsvcid": "4420", 00:20:37.212 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:37.212 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:37.212 "hdgst": false, 00:20:37.212 "ddgst": false 00:20:37.212 }, 00:20:37.212 "method": "bdev_nvme_attach_controller" 00:20:37.212 }' 00:20:37.212 [2024-05-15 11:12:34.255956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.212 [2024-05-15 11:12:34.328870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.582 Running I/O for 10 seconds... 00:20:38.582 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:38.582 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:20:38.582 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:38.582 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.582 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:38.840 11:12:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.840 11:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.840 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:38.840 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:38.840 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.097 11:12:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:39.097 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2312384 00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2312384 ']' 
00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2312384
00:20:39.354 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2312384
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2312384'
00:20:39.638 killing process with pid 2312384
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2312384
00:20:39.638 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2312384
00:20:39.638 Received shutdown signal, test time was about 0.896653 seconds
00:20:39.638
00:20:39.638 Latency(us)
00:20:39.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.638 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme1n1 : 0.88 289.77 18.11 0.00 0.00 217989.57 16298.52 213362.42
00:20:39.638 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme2n1 : 0.90 285.72 17.86 0.00 0.00 217567.72 16640.45 215186.03
00:20:39.638 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme3n1 : 0.87 301.58 18.85 0.00 0.00 200556.43 13962.02 217921.45
00:20:39.638 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme4n1 : 0.87 292.92 18.31 0.00 0.00 204105.24 12765.27 214274.23
00:20:39.638 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme5n1 : 0.89 286.81 17.93 0.00 0.00 204674.67 17438.27 209715.20
00:20:39.638 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme6n1 : 0.88 290.07 18.13 0.00 0.00 198489.27 17096.35 217921.45
00:20:39.638 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme7n1 : 0.89 288.82 18.05 0.00 0.00 195391.89 13677.08 217921.45
00:20:39.638 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme8n1 : 0.89 286.58 17.91 0.00 0.00 193032.01 17096.35 219745.06
00:20:39.638 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme9n1 : 0.87 221.61 13.85 0.00 0.00 242469.84 22225.25 227039.50
00:20:39.638 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.638 Verification LBA range: start 0x0 length 0x400
00:20:39.638 Nvme10n1 : 0.86 222.29 13.89 0.00 0.00 237164.93 17666.23 232510.33
00:20:39.639 ===================================================================================================================
00:20:39.639 Total : 2766.17 172.89 0.00 0.00 209616.41 12765.27 232510.33
00:20:39.895 11:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
target/shutdown.sh@114 -- # kill -0 2312098 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.827 11:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.827 rmmod nvme_tcp 00:20:40.827 rmmod nvme_fabrics 00:20:40.827 rmmod nvme_keyring 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2312098 ']' 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2312098 
00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2312098 ']' 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2312098 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2312098 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2312098' 00:20:40.827 killing process with pid 2312098 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2312098 00:20:40.827 [2024-05-15 11:12:38.056560] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:40.827 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2312098 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.395 11:12:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.395 11:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.299 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.299 00:20:43.299 real 0m8.083s 00:20:43.299 user 0m24.691s 00:20:43.299 sys 0m1.293s 00:20:43.299 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:43.299 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.299 ************************************ 00:20:43.299 END TEST nvmf_shutdown_tc2 00:20:43.299 ************************************ 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:43.559 ************************************ 00:20:43.559 START TEST nvmf_shutdown_tc3 00:20:43.559 ************************************ 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.559 
11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:43.559 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.560 
11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.560 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.560 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.560 11:12:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:43.560 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.560 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.560 11:12:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.560 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.560 11:12:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.819 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.819 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.819 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:20:43.820 00:20:43.820 --- 10.0.0.2 ping statistics --- 00:20:43.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.820 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:20:43.820 00:20:43.820 --- 10.0.0.1 ping statistics --- 00:20:43.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.820 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2313548 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2313548 00:20:43.820 11:12:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2313548 ']' 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:43.820 11:12:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.820 [2024-05-15 11:12:40.986529] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:20:43.820 [2024-05-15 11:12:40.986578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.820 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.820 [2024-05-15 11:12:41.045229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.078 [2024-05-15 11:12:41.119094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.078 [2024-05-15 11:12:41.119133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:44.078 [2024-05-15 11:12:41.119140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.078 [2024-05-15 11:12:41.119146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.078 [2024-05-15 11:12:41.119151] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.078 [2024-05-15 11:12:41.119256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.078 [2024-05-15 11:12:41.119362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.078 [2024-05-15 11:12:41.119473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.078 [2024-05-15 11:12:41.119474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:44.691 [2024-05-15 11:12:41.831066] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:44.691 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.692 11:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:44.692 Malloc1 00:20:44.692 [2024-05-15 11:12:41.926850] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:44.692 [2024-05-15 11:12:41.927089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.692 Malloc2 00:20:44.950 Malloc3 00:20:44.950 Malloc4 00:20:44.950 Malloc5 00:20:44.950 Malloc6 00:20:44.950 Malloc7 00:20:44.950 Malloc8 00:20:45.209 Malloc9 00:20:45.209 Malloc10 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2313840 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2313840 /var/tmp/bdevperf.sock 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2313840 ']' 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.209 { 00:20:45.209 "params": { 00:20:45.209 "name": "Nvme$subsystem", 00:20:45.209 "trtype": "$TEST_TRANSPORT", 00:20:45.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.209 "adrfam": "ipv4", 00:20:45.209 "trsvcid": "$NVMF_PORT", 00:20:45.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.209 "hdgst": ${hdgst:-false}, 00:20:45.209 "ddgst": ${ddgst:-false} 00:20:45.209 }, 00:20:45.209 "method": "bdev_nvme_attach_controller" 00:20:45.209 } 00:20:45.209 EOF 00:20:45.209 )") 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.209 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.209 { 00:20:45.209 "params": { 00:20:45.209 "name": "Nvme$subsystem", 00:20:45.209 "trtype": "$TEST_TRANSPORT", 00:20:45.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.209 "adrfam": "ipv4", 00:20:45.209 "trsvcid": "$NVMF_PORT", 00:20:45.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.209 "hdgst": ${hdgst:-false}, 00:20:45.209 "ddgst": ${ddgst:-false} 00:20:45.209 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.210 { 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme$subsystem", 00:20:45.210 "trtype": "$TEST_TRANSPORT", 00:20:45.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "$NVMF_PORT", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.210 "hdgst": ${hdgst:-false}, 00:20:45.210 "ddgst": ${ddgst:-false} 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 } 00:20:45.210 EOF 00:20:45.210 )") 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.210 { 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme$subsystem", 00:20:45.210 "trtype": "$TEST_TRANSPORT", 00:20:45.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "$NVMF_PORT", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.210 "hdgst": ${hdgst:-false}, 00:20:45.210 "ddgst": ${ddgst:-false} 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 } 00:20:45.210 EOF 00:20:45.210 )") 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.210 [2024-05-15 11:12:42.392258] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:20:45.210 [2024-05-15 11:12:42.392305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313840 ] 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.210 { 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme$subsystem", 00:20:45.210 "trtype": "$TEST_TRANSPORT", 00:20:45.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "$NVMF_PORT", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.210 "hdgst": ${hdgst:-false}, 00:20:45.210 "ddgst": ${ddgst:-false} 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 } 00:20:45.210 EOF 00:20:45.210 )") 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.210 { 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme$subsystem", 00:20:45.210 "trtype": "$TEST_TRANSPORT", 00:20:45.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "$NVMF_PORT", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.210 "hdgst": ${hdgst:-false}, 00:20:45.210 "ddgst": ${ddgst:-false} 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 } 00:20:45.210 EOF 00:20:45.210 )") 00:20:45.210 11:12:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.210 { 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme$subsystem", 00:20:45.210 "trtype": "$TEST_TRANSPORT", 00:20:45.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "$NVMF_PORT", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.210 "hdgst": ${hdgst:-false}, 00:20:45.210 "ddgst": ${ddgst:-false} 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 } 00:20:45.210 EOF 00:20:45.210 )") 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.210 { 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme$subsystem", 00:20:45.210 "trtype": "$TEST_TRANSPORT", 00:20:45.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "$NVMF_PORT", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.210 "hdgst": ${hdgst:-false}, 00:20:45.210 "ddgst": ${ddgst:-false} 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 } 00:20:45.210 EOF 00:20:45.210 )") 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:45.210 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:45.210 11:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme1", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme2", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme3", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme4", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme5", 00:20:45.210 
"trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme6", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme7", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme8", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:45.210 "hdgst": false, 00:20:45.210 "ddgst": false 00:20:45.210 }, 00:20:45.210 "method": "bdev_nvme_attach_controller" 00:20:45.210 },{ 00:20:45.210 "params": { 00:20:45.210 "name": "Nvme9", 00:20:45.210 "trtype": "tcp", 00:20:45.210 "traddr": "10.0.0.2", 00:20:45.210 "adrfam": "ipv4", 00:20:45.210 "trsvcid": "4420", 00:20:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.210 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:45.210 "hdgst": false, 00:20:45.211 "ddgst": 
false 00:20:45.211 }, 00:20:45.211 "method": "bdev_nvme_attach_controller" 00:20:45.211 },{ 00:20:45.211 "params": { 00:20:45.211 "name": "Nvme10", 00:20:45.211 "trtype": "tcp", 00:20:45.211 "traddr": "10.0.0.2", 00:20:45.211 "adrfam": "ipv4", 00:20:45.211 "trsvcid": "4420", 00:20:45.211 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.211 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.211 "hdgst": false, 00:20:45.211 "ddgst": false 00:20:45.211 }, 00:20:45.211 "method": "bdev_nvme_attach_controller" 00:20:45.211 }' 00:20:45.211 [2024-05-15 11:12:42.448966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.469 [2024-05-15 11:12:42.522229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.846 Running I/O for 10 seconds... 00:20:46.846 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:46.846 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:20:46.846 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:46.846 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.846 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.104 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.104 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.104 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:47.104 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:47.104 11:12:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:47.104 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:47.104 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:47.105 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.363 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.364 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:47.364 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:47.364 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:47.622 11:12:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2313548 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 2313548 ']' 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 2313548 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:47.622 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2313548 00:20:47.898 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:47.898 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:47.898 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2313548' 00:20:47.898 killing process with pid 2313548 00:20:47.898 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 2313548 00:20:47.898 [2024-05-15 11:12:44.903872] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:47.898 11:12:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 2313548 00:20:47.898 [2024-05-15 11:12:44.905252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21f30 is same with the state(5) to be set 00:20:47.898 [2024-05-15 11:12:44.905285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21f30 is same with the state(5) to be set 00:20:47.898 [2024-05-15 11:12:44.905293] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1fa10 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.907014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1fa10 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.907020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1fa10 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908361] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908434] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908505] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.899 [2024-05-15 11:12:44.908531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908579] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908651] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.908682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1feb0 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set 00:20:47.900 [2024-05-15 11:12:44.909863] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.909901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.909926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.909933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.909940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.909947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.909969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.909977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.909984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9db0 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.909998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.910026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.910032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.910039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.910047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.910056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.910064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.910071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.900 [2024-05-15 11:12:44.910080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd730 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.900 [2024-05-15 11:12:44.910106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.900 [2024-05-15 11:12:44.910115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ab70 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ff8a0 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20350 is same with the state(5) to be set
00:20:47.901 [2024-05-15 11:12:44.910311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:47.901 [2024-05-15 11:12:44.910320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.901 [2024-05-15 11:12:44.910327] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.901 [2024-05-15 11:12:44.910333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.901 [2024-05-15 11:12:44.910341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.901 [2024-05-15 11:12:44.910350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.901 [2024-05-15 11:12:44.910357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.901 [2024-05-15 11:12:44.910365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.901 [2024-05-15 11:12:44.910372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3e10 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 
11:12:44.911114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911194] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911271] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.901 [2024-05-15 11:12:44.911328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911346] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911418] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.911471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d207f0 is same with the state(5) to be set 00:20:47.902 [2024-05-15 11:12:44.912667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912707] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912793] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.902 [2024-05-15 11:12:44.912865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.902 [2024-05-15 11:12:44.912871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 
11:12:44.912966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.912988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.912997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the 
state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:12the state(5) to be set 00:20:47.903 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:47.903 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:12the state(5) to be set 00:20:47.903 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:47.903 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913219] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:12the state(5) to be set 00:20:47.903 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:47.903 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 
[2024-05-15 11:12:44.913269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.903 [2024-05-15 11:12:44.913273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.903 [2024-05-15 11:12:44.913280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.903 [2024-05-15 11:12:44.913285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:12the state(5) to be set 00:20:47.904 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 11:12:44.913366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:12the state(5) to be set 00:20:47.904 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913412] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 
00:20:47.904 [2024-05-15 11:12:44.913453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:12[2024-05-15 11:12:44.913465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with [2024-05-15 11:12:44.913473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:47.904 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21150 is same with the state(5) to be set 00:20:47.904 [2024-05-15 11:12:44.913561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:47.904 [2024-05-15 11:12:44.913622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.904 [2024-05-15 11:12:44.913659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.904 [2024-05-15 11:12:44.913666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.905 [2024-05-15 11:12:44.913674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.905 [2024-05-15 11:12:44.913680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.905 [2024-05-15 11:12:44.913688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.905 [2024-05-15 11:12:44.913694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.905 [2024-05-15 11:12:44.913703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.905 [2024-05-15 11:12:44.913709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.905 [2024-05-15 11:12:44.913717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.905 [2024-05-15 11:12:44.913723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.905 [2024-05-15 11:12:44.913786] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x229f950 was disconnected and freed. reset controller. 00:20:47.905 [2024-05-15 11:12:44.914400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 
11:12:44.914451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [... identical tcp.c:1598 *ERROR* messages for tqpair=0x1d215f0, timestamps 11:12:44.914457 through 11:12:44.914727, omitted ...] [2024-05-15 11:12:44.914733] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.914774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d215f0 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.915356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.915370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.915377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.915384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.905 [2024-05-15 11:12:44.915391] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.905 [... identical tcp.c:1598 *ERROR* messages for tqpair=0x1d21a90, timestamps 11:12:44.915397 through 11:12:44.915681, omitted ...] [2024-05-15 11:12:44.915686] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.906 [2024-05-15 11:12:44.915692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.906 [2024-05-15 11:12:44.915698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.906 [2024-05-15 11:12:44.915703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.906 [2024-05-15 11:12:44.917248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 
[2024-05-15 11:12:44.917412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.906 [2024-05-15 11:12:44.917571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.906 [2024-05-15 11:12:44.917581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 
[2024-05-15 11:12:44.917742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.917991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.917998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.918006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.918012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.918020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.918026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.918034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.918040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.918049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.907 [2024-05-15 11:12:44.918055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.907 [2024-05-15 11:12:44.918063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 
11:12:44.918070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.918499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.918604] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21f8150 was disconnected and freed. reset controller. 
00:20:47.908 [2024-05-15 11:12:44.919756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.919915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.919923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.928659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.928668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.928675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.928682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.928688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.928694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.928700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d21a90 is same with the state(5) to be set 00:20:47.908 [2024-05-15 11:12:44.931872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.931887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.931897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.931909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.931918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.931929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.931938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.931949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.931958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.931969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.931980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.931991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.908 [2024-05-15 11:12:44.932176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.908 [2024-05-15 11:12:44.932217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.908 [2024-05-15 11:12:44.932226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 
11:12:44.932633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.932945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.909 [2024-05-15 11:12:44.932955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.933040] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22a0c80 was disconnected and freed. reset controller. 
00:20:47.909 [2024-05-15 11:12:44.934671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.909 [2024-05-15 11:12:44.934702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:47.909 [2024-05-15 11:12:44.934745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22aff60 (9): Bad file descriptor 00:20:47.909 [2024-05-15 11:12:44.934763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff8a0 (9): Bad file descriptor 00:20:47.909 [2024-05-15 11:12:44.934789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.909 [2024-05-15 11:12:44.934800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.934811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.909 [2024-05-15 11:12:44.934820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.909 [2024-05-15 11:12:44.934830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.934838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.934848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.934857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.910 [2024-05-15 11:12:44.934866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c9960 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.934897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.934907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.934917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.934926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.934936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.934945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.934955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.934965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.934974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d04610 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.934992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9db0 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.935011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x21fd730 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.935025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222ab70 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.935057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22db0b0 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.935170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.910 [2024-05-15 11:12:44.935238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.935246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af0a0 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.935265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3e10 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.936917] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:47.910 [2024-05-15 11:12:44.936946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:47.910 [2024-05-15 11:12:44.937334] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:47.910 [2024-05-15 11:12:44.937388] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:47.910 
[2024-05-15 11:12:44.938227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.910 [2024-05-15 11:12:44.938347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.910 [2024-05-15 11:12:44.938361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ff8a0 with addr=10.0.0.2, port=4420 00:20:47.910 [2024-05-15 11:12:44.938372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ff8a0 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.938523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.910 [2024-05-15 11:12:44.938750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.910 [2024-05-15 11:12:44.938764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22aff60 with addr=10.0.0.2, port=4420 00:20:47.910 [2024-05-15 11:12:44.938773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aff60 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.938924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.910 [2024-05-15 11:12:44.939173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.910 [2024-05-15 11:12:44.939192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21fd730 with addr=10.0.0.2, port=4420 00:20:47.910 [2024-05-15 11:12:44.939201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd730 is same with the state(5) to be set 00:20:47.910 [2024-05-15 11:12:44.939585] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:47.910 [2024-05-15 11:12:44.939649] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:47.910 [2024-05-15 11:12:44.939702] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:20:47.910 [2024-05-15 11:12:44.939777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff8a0 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.939793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22aff60 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.939805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fd730 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.939933] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:47.910 [2024-05-15 11:12:44.939956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:47.910 [2024-05-15 11:12:44.939966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:47.910 [2024-05-15 11:12:44.939979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:47.910 [2024-05-15 11:12:44.939996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:47.910 [2024-05-15 11:12:44.940005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:47.910 [2024-05-15 11:12:44.940015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:47.910 [2024-05-15 11:12:44.940029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:47.910 [2024-05-15 11:12:44.940038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:47.910 [2024-05-15 11:12:44.940047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:47.910 [2024-05-15 11:12:44.940111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.910 [2024-05-15 11:12:44.940123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.910 [2024-05-15 11:12:44.940131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.910 [2024-05-15 11:12:44.944701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c9960 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.944726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d04610 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.944759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22db0b0 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.944779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22af0a0 (9): Bad file descriptor 00:20:47.910 [2024-05-15 11:12:44.944906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.944921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.944938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.944948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.944961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.944974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.944986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.944996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.945007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.945017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.945028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.945037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.945049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.945058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.910 [2024-05-15 11:12:44.945070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.910 [2024-05-15 11:12:44.945079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.911 [2024-05-15 11:12:44.945222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945338] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 
11:12:44.945698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.911 [2024-05-15 11:12:44.945741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.911 [2024-05-15 11:12:44.945753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.945984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.945993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 
[2024-05-15 11:12:44.946057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.946270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.946280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361a00 is same with the state(5) to be set 00:20:47.912 [2024-05-15 11:12:44.947695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.912 [2024-05-15 11:12:44.947837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.947984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.947994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.948005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.948015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.912 [2024-05-15 11:12:44.948026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.912 [2024-05-15 11:12:44.948036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.913 [2024-05-15 11:12:44.948204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948321] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 
11:12:44.948678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948797] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.913 [2024-05-15 11:12:44.948904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.913 [2024-05-15 11:12:44.948911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.948919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.948926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.948934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.948941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.948949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.948955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.948964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.948970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.948979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.948985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.948994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 
[2024-05-15 11:12:44.949001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.949009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.949018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.949025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2362f90 is same with the state(5) to be set 00:20:47.914 [2024-05-15 11:12:44.950086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.950101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.950112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.950119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.950128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.950135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.914 [2024-05-15 11:12:44.950143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.914 [2024-05-15 11:12:44.950151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.914 [2024-05-15 11:12:44.950159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.914 [2024-05-15 11:12:44.950169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs for cid:5-62 (lba 17024-24320, len:128) elided ...]
00:20:47.915 [2024-05-15 11:12:44.951070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.915 [2024-05-15 11:12:44.951077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.915 [2024-05-15 11:12:44.951085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22987f0 is same with the state(5) to be set
00:20:47.915 [2024-05-15 11:12:44.952422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:47.915 [2024-05-15 11:12:44.952444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:47.915 [2024-05-15 11:12:44.952453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:47.915 [2024-05-15 11:12:44.952785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.915 [2024-05-15 11:12:44.953021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.915 [2024-05-15 11:12:44.953033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222ab70 with addr=10.0.0.2, port=4420
00:20:47.915 [2024-05-15 11:12:44.953042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ab70 is same with the state(5) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state sequence for tqpair=0x23b9db0 and tqpair=0x22e3e10 elided ...]
00:20:47.915 [2024-05-15 11:12:44.954662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:47.915 [2024-05-15 11:12:44.954678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:47.915 [2024-05-15 11:12:44.954687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:47.915 [2024-05-15 11:12:44.954712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222ab70 (9): Bad file descriptor
00:20:47.915 [2024-05-15 11:12:44.954722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9db0 (9): Bad file descriptor
00:20:47.915 [2024-05-15 11:12:44.954731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3e10 (9): Bad file descriptor
[... the same connect() failed (errno = 111) / sock connection error / recv state sequence for tqpair=0x21fd730, tqpair=0x22aff60 and tqpair=0x21ff8a0 elided ...]
00:20:47.916 [2024-05-15 11:12:44.956157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:47.916 [2024-05-15 11:12:44.956166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:47.916 [2024-05-15 11:12:44.956175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
[... the same Ctrlr is in error state / controller reinitialization failed / in failed state triplet for cnode4 and cnode10 elided ...]
00:20:47.916 [2024-05-15 11:12:44.956290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.916 [2024-05-15 11:12:44.956298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
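The aborted-command entries above all follow the same fixed `nvme_qpair.c` print format. A minimal parsing sketch for tallying them when triaging a log like this one (a hypothetical helper, not part of the test suite; the regexes assume only the command/completion formats visible above):

```python
import re

# Matches SPDK "print command" entries, e.g.:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:\d+ lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
# Matches the aborted-completion entries printed for each command above.
ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")

def summarize(log_text):
    """Count printed commands and aborted completions; return (per-op counts, lba range)."""
    counts = {"READ": 0, "WRITE": 0, "ABORTED": 0}
    lbas = []
    for m in CMD_RE.finditer(log_text):
        counts[m.group("op")] += 1
        lbas.append(int(m.group("lba")))
    counts["ABORTED"] = len(ABORT_RE.findall(log_text))
    lba_range = (min(lbas), max(lbas)) if lbas else None
    return counts, lba_range

sample = (
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 "
    "lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
    "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) "
    "qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0\n"
)
print(summarize(sample))  # one READ, one ABORTED completion, lba range (16896, 16896)
```

Feeding the surrounding log through `summarize` confirms the pattern here: every printed READ/WRITE was aborted with SQ DELETION, which is consistent with the queue pairs being torn down during the controller resets logged above.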
00:20:47.916 [2024-05-15 11:12:44.956304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.916 [2024-05-15 11:12:44.956323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fd730 (9): Bad file descriptor
00:20:47.916 [2024-05-15 11:12:44.956332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22aff60 (9): Bad file descriptor
00:20:47.916 [2024-05-15 11:12:44.956340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff8a0 (9): Bad file descriptor
00:20:47.916 [2024-05-15 11:12:44.956378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.916 [2024-05-15 11:12:44.956387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs for cid:6-25 (lba 25344-27776, len:128) elided ...]
00:20:47.916 [2024-05-15 11:12:44.956709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.916 [2024-05-15 11:12:44.956715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs for cid:1-4 (lba 32896-33280, len:128) elided ...]
[... identical READ / ABORTED - SQ DELETION pairs for cid:26-31 (lba 27904-28544, len:128) elided ...]
00:20:47.917 [2024-05-15 11:12:44.956879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.956986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.956993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957148] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.917 [2024-05-15 11:12:44.957375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.917 [2024-05-15 11:12:44.957383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6cf0 is same with the state(5) to be set 00:20:47.918 [2024-05-15 11:12:44.958420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 
11:12:44.958453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958538] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 
[2024-05-15 11:12:44.958719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.958990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.958997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.959006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.959012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.959021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.959029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.959038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.959045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.918 [2024-05-15 11:12:44.959053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.918 [2024-05-15 11:12:44.959060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.919 [2024-05-15 11:12:44.959068] 
[repetitive log output condensed: between 2024-05-15 11:12:44.959 and 11:12:44.962, nvme_qpair.c: 243:nvme_io_qpair_print_command repeatedly logged *NOTICE* READ commands on sqid:1 (cid:0 through cid:63, nsid:1, lba range 16384–32640, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each paired with nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. Interleaved errors from nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f9650 is same with the state(5) to be set, and likewise for tqpair=0x21fab50. Output truncated mid-entry at spdk_nvme_print_completion:]
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.921 [2024-05-15 11:12:44.962880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.921 [2024-05-15 11:12:44.962896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.921 [2024-05-15 11:12:44.962911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.921 [2024-05-15 11:12:44.962925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.921 [2024-05-15 11:12:44.962944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.921 [2024-05-15 11:12:44.962959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.921 [2024-05-15 11:12:44.962970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.962977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.962984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.962992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 
11:12:44.963048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.922 [2024-05-15 11:12:44.963323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.922 [2024-05-15 11:12:44.963397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.922 [2024-05-15 11:12:44.963405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22972f0 is same with the state(5) to be set 00:20:47.922 [2024-05-15 11:12:44.965927] 
nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:47.922 [2024-05-15 11:12:44.965945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:47.922 [2024-05-15 11:12:44.965955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:47.922 task offset: 32384 on job bdev=Nvme1n1 fails
00:20:47.922
00:20:47.922 Latency(us)
00:20:47.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:47.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme1n1 ended in about 0.87 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme1n1 : 0.87 220.66 13.79 73.55 0.00 215236.34 8377.21 240716.58
00:20:47.922 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme2n1 ended in about 0.89 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme2n1 : 0.89 244.74 15.30 72.18 0.00 196208.07 18578.03 206067.98
00:20:47.922 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme3n1 ended in about 0.90 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme3n1 : 0.90 213.90 13.37 71.30 0.00 214179.39 25758.50 188743.68
00:20:47.922 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme4n1 ended in about 0.90 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme4n1 : 0.90 213.27 13.33 71.09 0.00 210897.70 14588.88 214274.23
00:20:47.922 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme5n1 ended in about 0.91 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme5n1 : 0.91 216.82 13.55 70.44 0.00 204969.49 16868.40 198773.54
00:20:47.922 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme6n1 ended in about 0.88 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme6n1 : 0.88 217.07 13.57 72.36 0.00 198977.89 16868.40 221568.67
00:20:47.922 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme7n1 ended in about 0.91 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme7n1 : 0.91 210.84 13.18 70.28 0.00 201579.97 12993.22 217009.64
00:20:47.922 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme8n1 ended in about 0.91 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme8n1 : 0.91 210.39 13.15 70.13 0.00 198139.10 19033.93 238892.97
00:20:47.922 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme9n1 ended in about 0.91 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme9n1 : 0.91 139.95 8.75 69.98 0.00 259692.04 18805.98 257129.07
00:20:47.922 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.922 Job: Nvme10n1 ended in about 0.90 seconds with error
00:20:47.922 Verification LBA range: start 0x0 length 0x400
00:20:47.922 Nvme10n1 : 0.90 141.86 8.87 70.93 0.00 250277.47 26898.25 248011.02
00:20:47.922 ===================================================================================================================
00:20:47.922 Total : 2029.49 126.84 712.23 0.00 212726.37 8377.21 257129.07
00:20:47.922 [2024-05-15 11:12:44.994201] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:47.922 [2024-05-15 11:12:44.994247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:47.922 [2024-05-15 11:12:44.994305]
nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:47.922 [2024-05-15 11:12:44.994315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:47.922 [2024-05-15 11:12:44.994324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:47.922 [2024-05-15 11:12:44.994339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:47.922 [2024-05-15 11:12:44.994347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:47.922 [2024-05-15 11:12:44.994356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:47.923 [2024-05-15 11:12:44.994368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:44.994378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:44.994386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:47.923 [2024-05-15 11:12:44.994435] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.923 [2024-05-15 11:12:44.994451] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.923 [2024-05-15 11:12:44.994464] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.923 [2024-05-15 11:12:44.994553] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.994562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.994568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.994883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.995118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.995130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c9960 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.995141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c9960 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.995343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.995554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.995568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d04610 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.995576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d04610 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.995742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.995913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.995924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22db0b0 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.995932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22db0b0 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.996089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.996280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.996292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22af0a0 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.996300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af0a0 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.996320] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.923 [2024-05-15 11:12:44.996331] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.923 [2024-05-15 11:12:44.996342] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.923 [2024-05-15 11:12:44.997373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:47.923 [2024-05-15 11:12:44.997391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:47.923 [2024-05-15 11:12:44.997400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:47.923 [2024-05-15 11:12:44.997456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c9960 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:44.997471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d04610 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:44.997481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22db0b0 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:44.997490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22af0a0 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:44.997547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:47.923 [2024-05-15 11:12:44.997558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:47.923 [2024-05-15 11:12:44.997567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:47.923 [2024-05-15 11:12:44.997794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.998008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.998020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e3e10 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.998029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3e10 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.998184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.998413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.998425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9db0 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.998437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9db0 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.998587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.998669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.998682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222ab70 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.998692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ab70 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.998701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:44.998710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:44.998721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:47.923 [2024-05-15 11:12:44.998732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:44.998742] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:44.998752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:47.923 [2024-05-15 11:12:44.998764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:44.998772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:44.998780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:20:47.923 [2024-05-15 11:12:44.998790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:44.998797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:44.998804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:47.923 [2024-05-15 11:12:44.998861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.998871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.998877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.998883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:44.999050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.999274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.999287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ff8a0 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.999294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ff8a0 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.999426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.999599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:44.999611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22aff60 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:44.999620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aff60 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:44.999774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:45.000083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.923 [2024-05-15 11:12:45.000097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21fd730 with addr=10.0.0.2, port=4420
00:20:47.923 [2024-05-15 11:12:45.000108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd730 is same with the state(5) to be set
00:20:47.923 [2024-05-15 11:12:45.000118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3e10 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:45.000128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9db0 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:45.000137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222ab70 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:45.000185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff8a0 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:45.000198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22aff60 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:45.000207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fd730 (9): Bad file descriptor
00:20:47.923 [2024-05-15 11:12:45.000215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:45.000222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:45.000229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:47.923 [2024-05-15 11:12:45.000239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:45.000246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:45.000253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:47.923 [2024-05-15 11:12:45.000262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:45.000268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:45.000275] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:47.923 [2024-05-15 11:12:45.000304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:45.000311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:45.000318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.923 [2024-05-15 11:12:45.000324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:45.000331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:45.000338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:47.923 [2024-05-15 11:12:45.000347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:45.000354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:45.000360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:47.923 [2024-05-15 11:12:45.000369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:47.923 [2024-05-15 11:12:45.000376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:47.923 [2024-05-15 11:12:45.000383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:47.924 [2024-05-15 11:12:45.000412] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.924 [2024-05-15 11:12:45.000419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.924 [2024-05-15 11:12:45.000429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.183 11:12:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:48.183 11:12:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:49.121 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2313840 00:20:49.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2313840) - No such process 00:20:49.121 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:49.121 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:49.121 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:49.121 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.380 rmmod nvme_tcp 00:20:49.380 rmmod nvme_fabrics 00:20:49.380 rmmod nvme_keyring 00:20:49.380 11:12:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.380 11:12:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.286 11:12:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.286 00:20:51.286 real 0m7.892s 00:20:51.286 user 0m19.571s 00:20:51.286 sys 0m1.297s 00:20:51.286 11:12:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:51.286 11:12:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.286 ************************************ 00:20:51.286 END TEST nvmf_shutdown_tc3 00:20:51.286 ************************************ 00:20:51.286 11:12:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - 
SIGINT SIGTERM EXIT 00:20:51.286 00:20:51.286 real 0m30.854s 00:20:51.286 user 1m18.085s 00:20:51.286 sys 0m7.975s 00:20:51.286 11:12:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:51.286 11:12:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.286 ************************************ 00:20:51.286 END TEST nvmf_shutdown 00:20:51.286 ************************************ 00:20:51.545 11:12:48 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:20:51.545 11:12:48 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:51.545 11:12:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.545 11:12:48 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:20:51.545 11:12:48 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:51.545 11:12:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.545 11:12:48 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:20:51.545 11:12:48 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:51.546 11:12:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:51.546 11:12:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:51.546 11:12:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.546 ************************************ 00:20:51.546 START TEST nvmf_multicontroller 00:20:51.546 ************************************ 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:51.546 * Looking for test storage... 
00:20:51.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.546 
11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.546 11:12:48 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.546 11:12:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.826 11:12:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:56.826 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:56.826 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.826 11:12:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:56.826 Found net devices under 0000:86:00.0: cvl_0_0 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:56.826 Found net devices under 0000:86:00.1: cvl_0_1 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.826 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:56.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:20:56.826 00:20:56.826 --- 10.0.0.2 ping statistics --- 00:20:56.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.826 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:56.827 00:20:56.827 --- 10.0.0.1 ping statistics --- 00:20:56.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.827 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.827 11:12:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2317996 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2317996 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2317996 ']' 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:56.827 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.827 [2024-05-15 11:12:54.073585] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:20:56.827 [2024-05-15 11:12:54.073626] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.086 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.086 [2024-05-15 11:12:54.131182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:57.086 [2024-05-15 11:12:54.210597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.086 [2024-05-15 11:12:54.210630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.086 [2024-05-15 11:12:54.210637] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.086 [2024-05-15 11:12:54.210643] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.086 [2024-05-15 11:12:54.210649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:57.086 [2024-05-15 11:12:54.210766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.086 [2024-05-15 11:12:54.210856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.086 [2024-05-15 11:12:54.210858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.655 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 [2024-05-15 11:12:54.924094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 Malloc0 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 [2024-05-15 11:12:54.988817] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:57.915 [2024-05-15 11:12:54.989046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:57.915 11:12:54 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.915 11:12:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 [2024-05-15 11:12:54.996952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:57.915 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.915 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:57.915 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.916 Malloc1 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:57.916 
11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2318135 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2318135 /var/tmp/bdevperf.sock 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2318135 ']' 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:57.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:57.916 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.859 NVMe0n1 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.859 1 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@649 -- # local es=0 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:58.859 11:12:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.859 request: 00:20:58.859 { 00:20:58.859 "name": "NVMe0", 00:20:58.859 "trtype": "tcp", 00:20:58.859 "traddr": "10.0.0.2", 00:20:58.859 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:58.859 "hostaddr": "10.0.0.2", 00:20:58.859 "hostsvcid": "60000", 00:20:58.859 "adrfam": "ipv4", 00:20:58.859 "trsvcid": "4420", 00:20:58.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.859 "method": "bdev_nvme_attach_controller", 00:20:58.859 "req_id": 1 00:20:58.859 } 00:20:58.859 Got JSON-RPC error response 00:20:58.859 response: 00:20:58.859 { 00:20:58.859 "code": -114, 00:20:58.859 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:58.859 } 00:20:58.859 11:12:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:58.859 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:20:58.860 request: 00:20:58.860 { 00:20:58.860 "name": "NVMe0", 00:20:58.860 "trtype": "tcp", 00:20:58.860 "traddr": "10.0.0.2", 00:20:58.860 "hostaddr": "10.0.0.2", 00:20:58.860 "hostsvcid": "60000", 00:20:58.860 "adrfam": "ipv4", 00:20:58.860 "trsvcid": "4420", 00:20:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.860 "method": "bdev_nvme_attach_controller", 00:20:58.860 "req_id": 1 00:20:58.860 } 00:20:58.860 Got JSON-RPC error response 00:20:58.860 response: 00:20:58.860 { 00:20:58.860 "code": -114, 00:20:58.860 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:58.860 } 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t 
"$arg")" in 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.860 request: 00:20:58.860 { 00:20:58.860 "name": "NVMe0", 00:20:58.860 "trtype": "tcp", 00:20:58.860 "traddr": "10.0.0.2", 00:20:58.860 "hostaddr": "10.0.0.2", 00:20:58.860 "hostsvcid": "60000", 00:20:58.860 "adrfam": "ipv4", 00:20:58.860 "trsvcid": "4420", 00:20:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.860 "multipath": "disable", 00:20:58.860 "method": "bdev_nvme_attach_controller", 00:20:58.860 "req_id": 1 00:20:58.860 } 00:20:58.860 Got JSON-RPC error response 00:20:58.860 response: 00:20:58.860 { 00:20:58.860 "code": -114, 00:20:58.860 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:58.860 } 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.860 request: 00:20:58.860 { 00:20:58.860 "name": "NVMe0", 00:20:58.860 "trtype": "tcp", 00:20:58.860 "traddr": "10.0.0.2", 00:20:58.860 "hostaddr": "10.0.0.2", 00:20:58.860 "hostsvcid": "60000", 00:20:58.860 "adrfam": "ipv4", 00:20:58.860 "trsvcid": "4420", 00:20:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.860 "multipath": "failover", 00:20:58.860 "method": "bdev_nvme_attach_controller", 00:20:58.860 "req_id": 1 00:20:58.860 } 00:20:58.860 Got JSON-RPC error response 00:20:58.860 response: 00:20:58.860 { 00:20:58.860 "code": -114, 00:20:58.860 
"message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:58.860 } 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.860 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.119 00:20:59.119 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.119 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.119 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.119 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.119 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.119 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:59.120 11:12:56 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.120 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.379 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:59.379 11:12:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.758 0 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2318135 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2318135 ']' 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2318135 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # uname 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2318135 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2318135' 00:21:00.758 killing process with pid 2318135 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2318135 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2318135 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:21:00.758 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:21:00.758 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:00.758 [2024-05-15 11:12:55.099349] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:21:00.758 [2024-05-15 11:12:55.099394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318135 ] 00:21:00.758 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.758 [2024-05-15 11:12:55.153777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.758 [2024-05-15 11:12:55.228801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.758 [2024-05-15 11:12:56.502897] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 88b30365-31f1-45a5-9831-d318fc7fe1d2 already exists 00:21:00.758 [2024-05-15 11:12:56.502924] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:88b30365-31f1-45a5-9831-d318fc7fe1d2 alias for bdev NVMe1n1 00:21:00.758 [2024-05-15 11:12:56.502933] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:00.758 Running I/O for 1 seconds... 
00:21:00.758 00:21:00.758 Latency(us) 00:21:00.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.758 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:00.758 NVMe0n1 : 1.00 24199.61 94.53 0.00 0.00 5281.66 4616.01 11853.47 00:21:00.758 =================================================================================================================== 00:21:00.758 Total : 24199.61 94.53 0.00 0.00 5281.66 4616.01 11853.47 00:21:00.758 Received shutdown signal, test time was about 1.000000 seconds 00:21:00.758 00:21:00.758 Latency(us) 00:21:00.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.759 =================================================================================================================== 00:21:00.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.759 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.759 rmmod nvme_tcp 00:21:00.759 rmmod nvme_fabrics 00:21:00.759 rmmod nvme_keyring 
00:21:00.759 11:12:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2317996 ']' 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2317996 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2317996 ']' 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2317996 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:00.759 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2317996 00:21:01.018 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:01.018 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:01.018 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2317996' 00:21:01.018 killing process with pid 2317996 00:21:01.019 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2317996 00:21:01.019 [2024-05-15 11:12:58.050304] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:01.019 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2317996 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.278 11:12:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.278 11:12:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.183 11:13:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.183 00:21:03.183 real 0m11.699s 00:21:03.183 user 0m16.757s 00:21:03.183 sys 0m4.729s 00:21:03.183 11:13:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:03.183 11:13:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.183 ************************************ 00:21:03.183 END TEST nvmf_multicontroller 00:21:03.183 ************************************ 00:21:03.183 11:13:00 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:03.183 11:13:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:03.183 11:13:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:03.183 11:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.183 ************************************ 00:21:03.183 START TEST nvmf_aer 00:21:03.183 ************************************ 00:21:03.183 11:13:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 
00:21:03.442 * Looking for test storage... 00:21:03.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.442 11:13:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.443 11:13:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.443 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.443 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.443 11:13:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.443 11:13:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.735 11:13:05 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:08.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:08.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:08.735 Found net devices under 0000:86:00.0: cvl_0_0 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:08.735 Found net devices under 0000:86:00.1: cvl_0_1 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.735 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:21:08.736 00:21:08.736 --- 10.0.0.2 ping statistics --- 00:21:08.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.736 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:21:08.736 00:21:08.736 --- 10.0.0.1 ping statistics --- 00:21:08.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.736 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
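The `nvmf_tcp_init` steps above move one port of the E810 pair into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host, then ping both directions to verify the link. The same sequence, condensed into a dry-run sketch — device and namespace names are taken from the log, and the commands are echoed rather than executed since the real ones need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# Device/namespace names match the log; swap run() for sudo to do it for real.
run() { echo "+ $*"; }        # stand-in for executing privileged commands

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target-side port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # sanity-check target reachability
```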
00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2322020 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2322020 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 2322020 ']' 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:08.736 11:13:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.736 [2024-05-15 11:13:05.752516] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:21:08.736 [2024-05-15 11:13:05.752562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.736 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.736 [2024-05-15 11:13:05.808941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.736 [2024-05-15 11:13:05.889778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:08.736 [2024-05-15 11:13:05.889810] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.736 [2024-05-15 11:13:05.889817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.736 [2024-05-15 11:13:05.889823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.736 [2024-05-15 11:13:05.889828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.736 [2024-05-15 11:13:05.889869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.736 [2024-05-15 11:13:05.889963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.736 [2024-05-15 11:13:05.890046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.736 [2024-05-15 11:13:05.890048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.304 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:09.304 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:21:09.304 11:13:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.304 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:09.304 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 [2024-05-15 11:13:06.602151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.599 11:13:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 Malloc0 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 [2024-05-15 11:13:06.653548] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:09.599 [2024-05-15 11:13:06.653786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.599 11:13:06 
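The `rpc_cmd` calls `aer.sh` has issued by this point (TCP transport, a 64 MiB Malloc bdev, a subsystem capped at two namespaces, namespace 1, and a listener on 10.0.0.2:4420) can be sketched as direct `rpc.py` invocations. `scripts/rpc.py` is SPDK's standard RPC client, but the path is an assumption here, and the client is echoed so the sketch runs without a live `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence host/aer.sh drives in this log.
# RPC_PY is echoed for a dry run; drop the "echo" against a real target.
RPC_PY="echo scripts/rpc.py"

$RPC_PY nvmf_create_transport -t tcp -o -u 8192
$RPC_PY bdev_malloc_create 64 512 --name Malloc0
$RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2          # allow any host, max 2 namespaces
$RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
$RPC_PY nvmf_get_subsystems                    # cnode1 should now list nsid 1
```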
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 [ 00:21:09.599 { 00:21:09.599 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:09.599 "subtype": "Discovery", 00:21:09.599 "listen_addresses": [], 00:21:09.599 "allow_any_host": true, 00:21:09.599 "hosts": [] 00:21:09.599 }, 00:21:09.599 { 00:21:09.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.599 "subtype": "NVMe", 00:21:09.599 "listen_addresses": [ 00:21:09.599 { 00:21:09.599 "trtype": "TCP", 00:21:09.599 "adrfam": "IPv4", 00:21:09.599 "traddr": "10.0.0.2", 00:21:09.599 "trsvcid": "4420" 00:21:09.599 } 00:21:09.599 ], 00:21:09.599 "allow_any_host": true, 00:21:09.599 "hosts": [], 00:21:09.599 "serial_number": "SPDK00000000000001", 00:21:09.599 "model_number": "SPDK bdev Controller", 00:21:09.599 "max_namespaces": 2, 00:21:09.599 "min_cntlid": 1, 00:21:09.599 "max_cntlid": 65519, 00:21:09.599 "namespaces": [ 00:21:09.599 { 00:21:09.599 "nsid": 1, 00:21:09.599 "bdev_name": "Malloc0", 00:21:09.599 "name": "Malloc0", 00:21:09.599 "nguid": "2742181813DD4CE2990B2A6C4A8268CD", 00:21:09.599 "uuid": "27421818-13dd-4ce2-990b-2a6c4a8268cd" 00:21:09.599 } 00:21:09.599 ] 00:21:09.599 } 00:21:09.599 ] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2322265 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:09.599 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.599 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:09.600 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:21:09.600 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:21:09.600 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 2 -lt 200 ']' 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=3 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.859 11:13:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 Malloc1 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 Asynchronous Event Request test 00:21:09.859 Attaching to 10.0.0.2 00:21:09.859 Attached to 10.0.0.2 00:21:09.859 Registering asynchronous event callbacks... 00:21:09.859 Starting namespace attribute notice tests for all controllers... 00:21:09.859 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:09.859 aer_cb - Changed Namespace 00:21:09.859 Cleaning up... 
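The `waitforfile` polling visible above (the `'[' '!' -e /tmp/aer_touch_file ']'` / `sleep 0.1` iterations from autotest_common.sh) blocks until the aer binary touches its marker file, then the script proceeds to add Malloc1 and trigger the namespace-change AEN. A minimal re-sketch of that loop, with the 200-iteration x 0.1 s budget taken from the log rather than copied verbatim from SPDK:

```shell
#!/usr/bin/env bash
# Sketch of waitforfile: poll for a file's existence with a ~20 s budget
# (200 iterations x 0.1 s), returning non-zero on timeout.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]            # success only if the file showed up
}

touch /tmp/aer_touch_file_demo
waitforfile /tmp/aer_touch_file_demo && echo "file present"
rm -f /tmp/aer_touch_file_demo
```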
00:21:09.859 [ 00:21:09.859 { 00:21:09.859 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:09.859 "subtype": "Discovery", 00:21:09.859 "listen_addresses": [], 00:21:09.859 "allow_any_host": true, 00:21:09.859 "hosts": [] 00:21:09.859 }, 00:21:09.859 { 00:21:09.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.859 "subtype": "NVMe", 00:21:09.859 "listen_addresses": [ 00:21:09.859 { 00:21:09.859 "trtype": "TCP", 00:21:09.859 "adrfam": "IPv4", 00:21:09.859 "traddr": "10.0.0.2", 00:21:09.859 "trsvcid": "4420" 00:21:09.859 } 00:21:09.859 ], 00:21:09.859 "allow_any_host": true, 00:21:09.859 "hosts": [], 00:21:09.859 "serial_number": "SPDK00000000000001", 00:21:09.859 "model_number": "SPDK bdev Controller", 00:21:09.859 "max_namespaces": 2, 00:21:09.859 "min_cntlid": 1, 00:21:09.859 "max_cntlid": 65519, 00:21:09.859 "namespaces": [ 00:21:09.859 { 00:21:09.859 "nsid": 1, 00:21:09.859 "bdev_name": "Malloc0", 00:21:09.859 "name": "Malloc0", 00:21:09.859 "nguid": "2742181813DD4CE2990B2A6C4A8268CD", 00:21:09.859 "uuid": "27421818-13dd-4ce2-990b-2a6c4a8268cd" 00:21:09.859 }, 00:21:09.859 { 00:21:09.859 "nsid": 2, 00:21:09.859 "bdev_name": "Malloc1", 00:21:09.859 "name": "Malloc1", 00:21:09.859 "nguid": "DAFE06CD5F864DF28448A346922F01AA", 00:21:09.859 "uuid": "dafe06cd-5f86-4df2-8448-a346922f01aa" 00:21:09.859 } 00:21:09.859 ] 00:21:09.859 } 00:21:09.859 ] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2322265 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete 
Malloc1 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.859 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.118 rmmod nvme_tcp 00:21:10.118 rmmod nvme_fabrics 00:21:10.118 rmmod nvme_keyring 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2322020 ']' 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2322020 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 2322020 ']' 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- 
# kill -0 2322020 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2322020 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2322020' 00:21:10.118 killing process with pid 2322020 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 2322020 00:21:10.118 [2024-05-15 11:13:07.235451] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:10.118 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 2322020 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.377 11:13:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.281 11:13:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.281 00:21:12.281 real 0m9.072s 
00:21:12.281 user 0m7.605s 00:21:12.281 sys 0m4.315s 00:21:12.281 11:13:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:12.281 11:13:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.281 ************************************ 00:21:12.281 END TEST nvmf_aer 00:21:12.281 ************************************ 00:21:12.540 11:13:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:12.540 11:13:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:12.540 11:13:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:12.540 11:13:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.540 ************************************ 00:21:12.540 START TEST nvmf_async_init 00:21:12.540 ************************************ 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:12.540 * Looking for test storage... 
00:21:12.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.540 11:13:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ea490e6e7c3a4a5cbd49f577ddcd0fb8 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.541 11:13:09 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.813 
11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:17.813 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:17.813 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.813 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:17.814 Found net devices under 0000:86:00.0: cvl_0_0 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:17.814 Found net devices under 0000:86:00.1: cvl_0_1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:17.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:21:17.814 00:21:17.814 --- 10.0.0.2 ping statistics --- 00:21:17.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.814 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:21:17.814 00:21:17.814 --- 10.0.0.1 ping statistics --- 00:21:17.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.814 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2325664 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2325664 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 2325664 ']' 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:17.814 11:13:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.814 [2024-05-15 11:13:14.662577] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:21:17.814 [2024-05-15 11:13:14.662619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.814 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.814 [2024-05-15 11:13:14.718418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.814 [2024-05-15 11:13:14.796744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.814 [2024-05-15 11:13:14.796782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:17.814 [2024-05-15 11:13:14.796789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.814 [2024-05-15 11:13:14.796795] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.814 [2024-05-15 11:13:14.796800] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.814 [2024-05-15 11:13:14.796818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 [2024-05-15 11:13:15.512936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 null0 00:21:18.379 
11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ea490e6e7c3a4a5cbd49f577ddcd0fb8 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 [2024-05-15 11:13:15.552989] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:18.379 [2024-05-15 11:13:15.553155] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.379 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.637 nvme0n1 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.637 [ 00:21:18.637 { 00:21:18.637 "name": "nvme0n1", 00:21:18.637 "aliases": [ 00:21:18.637 "ea490e6e-7c3a-4a5c-bd49-f577ddcd0fb8" 00:21:18.637 ], 00:21:18.637 "product_name": "NVMe disk", 00:21:18.637 "block_size": 512, 00:21:18.637 "num_blocks": 2097152, 00:21:18.637 "uuid": "ea490e6e-7c3a-4a5c-bd49-f577ddcd0fb8", 00:21:18.637 "assigned_rate_limits": { 00:21:18.637 "rw_ios_per_sec": 0, 00:21:18.637 "rw_mbytes_per_sec": 0, 00:21:18.637 "r_mbytes_per_sec": 0, 00:21:18.637 "w_mbytes_per_sec": 0 00:21:18.637 }, 00:21:18.637 "claimed": false, 00:21:18.637 "zoned": false, 00:21:18.637 "supported_io_types": { 00:21:18.637 "read": true, 00:21:18.637 "write": true, 00:21:18.637 "unmap": false, 00:21:18.637 "write_zeroes": true, 00:21:18.637 "flush": true, 00:21:18.637 "reset": true, 00:21:18.637 "compare": true, 00:21:18.637 "compare_and_write": true, 00:21:18.637 "abort": true, 00:21:18.637 "nvme_admin": true, 00:21:18.637 "nvme_io": true 00:21:18.637 }, 00:21:18.637 "memory_domains": [ 
00:21:18.637 { 00:21:18.637 "dma_device_id": "system", 00:21:18.637 "dma_device_type": 1 00:21:18.637 } 00:21:18.637 ], 00:21:18.637 "driver_specific": { 00:21:18.637 "nvme": [ 00:21:18.637 { 00:21:18.637 "trid": { 00:21:18.637 "trtype": "TCP", 00:21:18.637 "adrfam": "IPv4", 00:21:18.637 "traddr": "10.0.0.2", 00:21:18.637 "trsvcid": "4420", 00:21:18.637 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.637 }, 00:21:18.637 "ctrlr_data": { 00:21:18.637 "cntlid": 1, 00:21:18.637 "vendor_id": "0x8086", 00:21:18.637 "model_number": "SPDK bdev Controller", 00:21:18.637 "serial_number": "00000000000000000000", 00:21:18.637 "firmware_revision": "24.05", 00:21:18.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.637 "oacs": { 00:21:18.637 "security": 0, 00:21:18.637 "format": 0, 00:21:18.637 "firmware": 0, 00:21:18.637 "ns_manage": 0 00:21:18.637 }, 00:21:18.637 "multi_ctrlr": true, 00:21:18.637 "ana_reporting": false 00:21:18.637 }, 00:21:18.637 "vs": { 00:21:18.637 "nvme_version": "1.3" 00:21:18.637 }, 00:21:18.637 "ns_data": { 00:21:18.637 "id": 1, 00:21:18.637 "can_share": true 00:21:18.637 } 00:21:18.637 } 00:21:18.637 ], 00:21:18.637 "mp_policy": "active_passive" 00:21:18.637 } 00:21:18.637 } 00:21:18.637 ] 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.637 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.637 [2024-05-15 11:13:15.801665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.637 [2024-05-15 11:13:15.801716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8260 (9): Bad file descriptor 00:21:18.895 [2024-05-15 11:13:15.933249] 
bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.895 [ 00:21:18.895 { 00:21:18.895 "name": "nvme0n1", 00:21:18.895 "aliases": [ 00:21:18.895 "ea490e6e-7c3a-4a5c-bd49-f577ddcd0fb8" 00:21:18.895 ], 00:21:18.895 "product_name": "NVMe disk", 00:21:18.895 "block_size": 512, 00:21:18.895 "num_blocks": 2097152, 00:21:18.895 "uuid": "ea490e6e-7c3a-4a5c-bd49-f577ddcd0fb8", 00:21:18.895 "assigned_rate_limits": { 00:21:18.895 "rw_ios_per_sec": 0, 00:21:18.895 "rw_mbytes_per_sec": 0, 00:21:18.895 "r_mbytes_per_sec": 0, 00:21:18.895 "w_mbytes_per_sec": 0 00:21:18.895 }, 00:21:18.895 "claimed": false, 00:21:18.895 "zoned": false, 00:21:18.895 "supported_io_types": { 00:21:18.895 "read": true, 00:21:18.895 "write": true, 00:21:18.895 "unmap": false, 00:21:18.895 "write_zeroes": true, 00:21:18.895 "flush": true, 00:21:18.895 "reset": true, 00:21:18.895 "compare": true, 00:21:18.895 "compare_and_write": true, 00:21:18.895 "abort": true, 00:21:18.895 "nvme_admin": true, 00:21:18.895 "nvme_io": true 00:21:18.895 }, 00:21:18.895 "memory_domains": [ 00:21:18.895 { 00:21:18.895 "dma_device_id": "system", 00:21:18.895 "dma_device_type": 1 00:21:18.895 } 00:21:18.895 ], 00:21:18.895 "driver_specific": { 00:21:18.895 "nvme": [ 00:21:18.895 { 00:21:18.895 "trid": { 00:21:18.895 "trtype": "TCP", 00:21:18.895 "adrfam": "IPv4", 00:21:18.895 "traddr": "10.0.0.2", 00:21:18.895 "trsvcid": "4420", 00:21:18.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.895 }, 00:21:18.895 "ctrlr_data": { 00:21:18.895 "cntlid": 2, 00:21:18.895 "vendor_id": "0x8086", 
00:21:18.895 "model_number": "SPDK bdev Controller", 00:21:18.895 "serial_number": "00000000000000000000", 00:21:18.895 "firmware_revision": "24.05", 00:21:18.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.895 "oacs": { 00:21:18.895 "security": 0, 00:21:18.895 "format": 0, 00:21:18.895 "firmware": 0, 00:21:18.895 "ns_manage": 0 00:21:18.895 }, 00:21:18.895 "multi_ctrlr": true, 00:21:18.895 "ana_reporting": false 00:21:18.895 }, 00:21:18.895 "vs": { 00:21:18.895 "nvme_version": "1.3" 00:21:18.895 }, 00:21:18.895 "ns_data": { 00:21:18.895 "id": 1, 00:21:18.895 "can_share": true 00:21:18.895 } 00:21:18.895 } 00:21:18.895 ], 00:21:18.895 "mp_policy": "active_passive" 00:21:18.895 } 00:21:18.895 } 00:21:18.895 ] 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.895 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kRs4s0uTEY 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kRs4s0uTEY 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 [2024-05-15 11:13:15.982233] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.896 [2024-05-15 11:13:15.982342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kRs4s0uTEY 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 [2024-05-15 11:13:15.990244] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kRs4s0uTEY 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.896 11:13:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 [2024-05-15 11:13:15.998267] bdev_nvme_rpc.c: 
518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.896 [2024-05-15 11:13:15.998303] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.896 nvme0n1 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 [ 00:21:18.896 { 00:21:18.896 "name": "nvme0n1", 00:21:18.896 "aliases": [ 00:21:18.896 "ea490e6e-7c3a-4a5c-bd49-f577ddcd0fb8" 00:21:18.896 ], 00:21:18.896 "product_name": "NVMe disk", 00:21:18.896 "block_size": 512, 00:21:18.896 "num_blocks": 2097152, 00:21:18.896 "uuid": "ea490e6e-7c3a-4a5c-bd49-f577ddcd0fb8", 00:21:18.896 "assigned_rate_limits": { 00:21:18.896 "rw_ios_per_sec": 0, 00:21:18.896 "rw_mbytes_per_sec": 0, 00:21:18.896 "r_mbytes_per_sec": 0, 00:21:18.896 "w_mbytes_per_sec": 0 00:21:18.896 }, 00:21:18.896 "claimed": false, 00:21:18.896 "zoned": false, 00:21:18.896 "supported_io_types": { 00:21:18.896 "read": true, 00:21:18.896 "write": true, 00:21:18.896 "unmap": false, 00:21:18.896 "write_zeroes": true, 00:21:18.896 "flush": true, 00:21:18.896 "reset": true, 00:21:18.896 "compare": true, 00:21:18.896 "compare_and_write": true, 00:21:18.896 "abort": true, 00:21:18.896 "nvme_admin": true, 00:21:18.896 "nvme_io": true 00:21:18.896 }, 00:21:18.896 "memory_domains": [ 00:21:18.896 { 00:21:18.896 "dma_device_id": "system", 00:21:18.896 "dma_device_type": 1 00:21:18.896 } 00:21:18.896 ], 00:21:18.896 "driver_specific": { 00:21:18.896 "nvme": [ 00:21:18.896 { 00:21:18.896 "trid": { 00:21:18.896 "trtype": "TCP", 00:21:18.896 "adrfam": "IPv4", 00:21:18.896 "traddr": 
"10.0.0.2", 00:21:18.896 "trsvcid": "4421", 00:21:18.896 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.896 }, 00:21:18.896 "ctrlr_data": { 00:21:18.896 "cntlid": 3, 00:21:18.896 "vendor_id": "0x8086", 00:21:18.896 "model_number": "SPDK bdev Controller", 00:21:18.896 "serial_number": "00000000000000000000", 00:21:18.896 "firmware_revision": "24.05", 00:21:18.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.896 "oacs": { 00:21:18.896 "security": 0, 00:21:18.896 "format": 0, 00:21:18.896 "firmware": 0, 00:21:18.896 "ns_manage": 0 00:21:18.896 }, 00:21:18.896 "multi_ctrlr": true, 00:21:18.896 "ana_reporting": false 00:21:18.896 }, 00:21:18.896 "vs": { 00:21:18.896 "nvme_version": "1.3" 00:21:18.896 }, 00:21:18.896 "ns_data": { 00:21:18.896 "id": 1, 00:21:18.896 "can_share": true 00:21:18.896 } 00:21:18.896 } 00:21:18.896 ], 00:21:18.896 "mp_policy": "active_passive" 00:21:18.896 } 00:21:18.896 } 00:21:18.896 ] 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.kRs4s0uTEY 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.896 
11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.896 rmmod nvme_tcp 00:21:18.896 rmmod nvme_fabrics 00:21:18.896 rmmod nvme_keyring 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2325664 ']' 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2325664 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 2325664 ']' 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 2325664 00:21:18.896 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2325664 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2325664' 00:21:19.154 killing process with pid 2325664 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 2325664 00:21:19.154 [2024-05-15 11:13:16.204495] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:21:19.154 [2024-05-15 11:13:16.204522] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:19.154 [2024-05-15 11:13:16.204529] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 2325664 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.154 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.155 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.155 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.155 11:13:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.155 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.155 11:13:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.687 11:13:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.687 00:21:21.687 real 0m8.868s 00:21:21.687 user 0m3.269s 00:21:21.687 sys 0m3.952s 00:21:21.687 11:13:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:21.687 11:13:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 ************************************ 00:21:21.687 END TEST nvmf_async_init 00:21:21.687 ************************************ 00:21:21.687 11:13:18 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:21.687 11:13:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:21.687 11:13:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:21.687 11:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 ************************************ 00:21:21.687 START TEST dma 00:21:21.687 ************************************ 00:21:21.687 11:13:18 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:21.687 * Looking for test storage... 00:21:21.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.687 11:13:18 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.687 11:13:18 
nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.687 11:13:18 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.687 11:13:18 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.687 11:13:18 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.687 11:13:18 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.687 11:13:18 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.687 11:13:18 nvmf_tcp.dma -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.687 11:13:18 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:21.687 11:13:18 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.687 11:13:18 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.687 11:13:18 
nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:21.687 11:13:18 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:21.687 00:21:21.687 real 0m0.119s 00:21:21.687 user 0m0.062s 00:21:21.687 sys 0m0.065s 00:21:21.687 11:13:18 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:21.687 11:13:18 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 ************************************ 00:21:21.687 END TEST dma 00:21:21.687 ************************************ 00:21:21.687 11:13:18 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.687 11:13:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:21.687 11:13:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:21.687 11:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 ************************************ 00:21:21.687 START TEST nvmf_identify 00:21:21.687 ************************************ 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.687 * Looking for test storage... 
00:21:21.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.687 11:13:18 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.687 11:13:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.688 11:13:18 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.688 11:13:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.952 11:13:23 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.952 
11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:26.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:26.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.952 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:26.953 Found net devices under 0000:86:00.0: cvl_0_0 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:21:26.953 Found net devices under 0000:86:00.1: cvl_0_1
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:26.953 11:13:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:26.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:26.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms
00:21:26.953
00:21:26.953 --- 10.0.0.2 ping statistics ---
00:21:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:26.953 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:26.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:26.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms
00:21:26.953
00:21:26.953 --- 10.0.0.1 ping statistics ---
00:21:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:26.953 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2329374
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2329374
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 2329374 ']'
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:26.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable
00:21:26.953 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:26.953 [2024-05-15 11:13:24.152818] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:21:26.953 [2024-05-15 11:13:24.152859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:26.953 EAL: No free 2048 kB hugepages reported on node 1
00:21:26.953 [2024-05-15 11:13:24.208334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:27.211 [2024-05-15 11:13:24.290303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:27.211 [2024-05-15 11:13:24.290350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:27.211 [2024-05-15 11:13:24.290357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:27.211 [2024-05-15 11:13:24.290365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:27.211 [2024-05-15 11:13:24.290369] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:27.211 [2024-05-15 11:13:24.290428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:27.211 [2024-05-15 11:13:24.290523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:27.211 [2024-05-15 11:13:24.290608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:27.211 [2024-05-15 11:13:24.290609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:27.775 [2024-05-15 11:13:24.967046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable
00:21:27.775 11:13:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:27.775 Malloc0
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:27.775 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:28.036 [2024-05-15 11:13:25.054901] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:21:28.036 [2024-05-15 11:13:25.055132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:28.036 [
00:21:28.036   {
00:21:28.036     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:28.036     "subtype": "Discovery",
00:21:28.036     "listen_addresses": [
00:21:28.036       {
00:21:28.036         "trtype": "TCP",
00:21:28.036         "adrfam": "IPv4",
00:21:28.036         "traddr": "10.0.0.2",
00:21:28.036         "trsvcid": "4420"
00:21:28.036       }
00:21:28.036     ],
00:21:28.036     "allow_any_host": true,
00:21:28.036     "hosts": []
00:21:28.036   },
00:21:28.036   {
00:21:28.036     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:28.036     "subtype": "NVMe",
00:21:28.036     "listen_addresses": [
00:21:28.036       {
00:21:28.036         "trtype": "TCP",
00:21:28.036         "adrfam": "IPv4",
00:21:28.036         "traddr": "10.0.0.2",
00:21:28.036         "trsvcid": "4420"
00:21:28.036       }
00:21:28.036     ],
00:21:28.036     "allow_any_host": true,
00:21:28.036     "hosts": [],
00:21:28.036     "serial_number": "SPDK00000000000001",
00:21:28.036     "model_number": "SPDK bdev Controller",
00:21:28.036     "max_namespaces": 32,
00:21:28.036     "min_cntlid": 1,
00:21:28.036     "max_cntlid": 65519,
00:21:28.036     "namespaces": [
00:21:28.036       {
00:21:28.036         "nsid": 1,
00:21:28.036         "bdev_name": "Malloc0",
00:21:28.036         "name": "Malloc0",
00:21:28.036         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:21:28.036         "eui64": "ABCDEF0123456789",
00:21:28.036         "uuid": "5300c41c-eb57-4321-8fea-cbdc6a6ea0a0"
00:21:28.036       }
00:21:28.036     ]
00:21:28.036   }
00:21:28.036 ]
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:28.036 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:21:28.036 [2024-05-15 11:13:25.106730] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:21:28.036 [2024-05-15 11:13:25.106776] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329621 ]
00:21:28.036 EAL: No free 2048 kB hugepages reported on node 1
00:21:28.036 [2024-05-15 11:13:25.135714] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:21:28.036 [2024-05-15 11:13:25.135759] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:28.036 [2024-05-15 11:13:25.135764] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:28.036 [2024-05-15 11:13:25.135774] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:28.036 [2024-05-15 11:13:25.135781] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:28.036 [2024-05-15 11:13:25.136103] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:21:28.036 [2024-05-15 11:13:25.136132] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1240c30 0
00:21:28.036 [2024-05-15 11:13:25.150170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:28.036 [2024-05-15 11:13:25.150187] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:28.036 [2024-05-15 11:13:25.150192] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:28.036 [2024-05-15 11:13:25.150194] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:28.036 [2024-05-15 11:13:25.150230] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.036 [2024-05-15 11:13:25.150236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.036 [2024-05-15 11:13:25.150240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.036 [2024-05-15 11:13:25.150254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:28.036 [2024-05-15 11:13:25.150269] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.036 [2024-05-15 11:13:25.158175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.036 [2024-05-15 11:13:25.158184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.036 [2024-05-15 11:13:25.158187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.036 [2024-05-15 11:13:25.158191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.036 [2024-05-15 11:13:25.158202] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:28.036 [2024-05-15 11:13:25.158208] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:21:28.036 [2024-05-15 11:13:25.158213] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:21:28.036 [2024-05-15 11:13:25.158227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.036 [2024-05-15 11:13:25.158231] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.036 [2024-05-15 11:13:25.158234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.036 [2024-05-15 11:13:25.158242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.158254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.158415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.158421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.158425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158428] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.158433] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:21:28.037 [2024-05-15 11:13:25.158439] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:21:28.037 [2024-05-15 11:13:25.158446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.158458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.158468] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.158536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.158542] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.158545] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158548] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.158554] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:21:28.037 [2024-05-15 11:13:25.158561] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:21:28.037 [2024-05-15 11:13:25.158566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.158578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.158587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.158667] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.158672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.158675] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158678] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.158683] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:28.037 [2024-05-15 11:13:25.158692] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.158706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.158716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.158780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.158785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.158788] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158791] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.158796] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:21:28.037 [2024-05-15 11:13:25.158800] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:21:28.037 [2024-05-15 11:13:25.158806] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:28.037 [2024-05-15 11:13:25.158911] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:21:28.037 [2024-05-15 11:13:25.158915] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:28.037 [2024-05-15 11:13:25.158924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158927] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.158930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.158935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.158944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.159008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.159013] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.159016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159019] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.159024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:28.037 [2024-05-15 11:13:25.159032] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159038] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.159044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.159053] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.159117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.159123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.159126] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159129] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.159134] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:28.037 [2024-05-15 11:13:25.159139] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:21:28.037 [2024-05-15 11:13:25.159146] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:21:28.037 [2024-05-15 11:13:25.159153] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:21:28.037 [2024-05-15 11:13:25.159161] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159170] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.159176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.037 [2024-05-15 11:13:25.159186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.159275] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:28.037 [2024-05-15 11:13:25.159280] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:28.037 [2024-05-15 11:13:25.159283] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159287] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240c30): datao=0, datal=4096, cccid=0
00:21:28.037 [2024-05-15 11:13:25.159291] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8980) on tqpair(0x1240c30): expected_datao=0, payload_size=4096
00:21:28.037 [2024-05-15 11:13:25.159295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159315] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159320] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.159366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.037 [2024-05-15 11:13:25.159369] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.037 [2024-05-15 11:13:25.159379] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:21:28.037 [2024-05-15 11:13:25.159384] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:21:28.037 [2024-05-15 11:13:25.159388] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:21:28.037 [2024-05-15 11:13:25.159392] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:21:28.037 [2024-05-15 11:13:25.159396] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:21:28.037 [2024-05-15 11:13:25.159400] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:21:28.037 [2024-05-15 11:13:25.159410] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:21:28.037 [2024-05-15 11:13:25.159417] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.037 [2024-05-15 11:13:25.159424] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.037 [2024-05-15 11:13:25.159430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:28.037 [2024-05-15 11:13:25.159440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.037 [2024-05-15 11:13:25.159510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.037 [2024-05-15 11:13:25.159518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.038 [2024-05-15 11:13:25.159521] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8980) on tqpair=0x1240c30
00:21:28.038 [2024-05-15 11:13:25.159534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159537] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240c30)
00:21:28.038 [2024-05-15 11:13:25.159545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:28.038 [2024-05-15 11:13:25.159551] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159554] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1240c30)
00:21:28.038 [2024-05-15 11:13:25.159562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:28.038 [2024-05-15 11:13:25.159567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1240c30)
00:21:28.038 [2024-05-15 11:13:25.159578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:28.038 [2024-05-15 11:13:25.159583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30)
00:21:28.038 [2024-05-15 11:13:25.159594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:28.038 [2024-05-15 11:13:25.159597] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:21:28.038 [2024-05-15 11:13:25.159605] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:28.038 [2024-05-15 11:13:25.159611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240c30)
00:21:28.038 [2024-05-15 11:13:25.159619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.038 [2024-05-15 11:13:25.159630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8980, cid 0, qid 0
00:21:28.038 [2024-05-15 11:13:25.159634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8ae0, cid 1, qid 0
00:21:28.038 [2024-05-15 11:13:25.159638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8c40, cid 2, qid 0
00:21:28.038 [2024-05-15 11:13:25.159642] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0
00:21:28.038 [2024-05-15 11:13:25.159646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8f00, cid 4, qid 0
00:21:28.038 [2024-05-15 11:13:25.159749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.038 [2024-05-15 11:13:25.159755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.038 [2024-05-15 11:13:25.159757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8f00) on tqpair=0x1240c30
00:21:28.038 [2024-05-15 11:13:25.159768] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:21:28.038 [2024-05-15 11:13:25.159774] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:21:28.038 [2024-05-15 11:13:25.159783] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159787] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240c30)
00:21:28.038 [2024-05-15 11:13:25.159792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.038 [2024-05-15 11:13:25.159801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8f00, cid 4, qid 0
00:21:28.038 [2024-05-15 11:13:25.159879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:28.038 [2024-05-15 11:13:25.159885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:28.038 [2024-05-15 11:13:25.159888] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159890] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240c30): datao=0, datal=4096, cccid=4
00:21:28.038 [2024-05-15 11:13:25.159894] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8f00) on tqpair(0x1240c30): expected_datao=0, payload_size=4096
00:21:28.038 [2024-05-15 11:13:25.159898] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159903] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159907] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.038 [2024-05-15 11:13:25.159924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.038 [2024-05-15 11:13:25.159927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.159930] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8f00) on tqpair=0x1240c30
00:21:28.038 [2024-05-15 11:13:25.159941] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:21:28.038
[2024-05-15 11:13:25.159964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.159968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240c30) 00:21:28.038 [2024-05-15 11:13:25.159973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.038 [2024-05-15 11:13:25.159979] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.159982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.159985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1240c30) 00:21:28.038 [2024-05-15 11:13:25.159990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:28.038 [2024-05-15 11:13:25.160004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8f00, cid 4, qid 0 00:21:28.038 [2024-05-15 11:13:25.160008] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9060, cid 5, qid 0 00:21:28.038 [2024-05-15 11:13:25.160105] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.038 [2024-05-15 11:13:25.160111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.038 [2024-05-15 11:13:25.160114] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.160117] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240c30): datao=0, datal=1024, cccid=4 00:21:28.038 [2024-05-15 11:13:25.160121] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8f00) on tqpair(0x1240c30): expected_datao=0, payload_size=1024 00:21:28.038 [2024-05-15 11:13:25.160124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.038 
[2024-05-15 11:13:25.160130] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.160133] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.160141] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.038 [2024-05-15 11:13:25.160146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.038 [2024-05-15 11:13:25.160149] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.160152] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9060) on tqpair=0x1240c30 00:21:28.038 [2024-05-15 11:13:25.204171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.038 [2024-05-15 11:13:25.204183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.038 [2024-05-15 11:13:25.204186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204190] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8f00) on tqpair=0x1240c30 00:21:28.038 [2024-05-15 11:13:25.204201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240c30) 00:21:28.038 [2024-05-15 11:13:25.204212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.038 [2024-05-15 11:13:25.204227] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8f00, cid 4, qid 0 00:21:28.038 [2024-05-15 11:13:25.204389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.038 [2024-05-15 11:13:25.204395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.038 [2024-05-15 11:13:25.204397] 
nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204401] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240c30): datao=0, datal=3072, cccid=4 00:21:28.038 [2024-05-15 11:13:25.204404] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8f00) on tqpair(0x1240c30): expected_datao=0, payload_size=3072 00:21:28.038 [2024-05-15 11:13:25.204408] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204427] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204431] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.038 [2024-05-15 11:13:25.204475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.038 [2024-05-15 11:13:25.204478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204481] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8f00) on tqpair=0x1240c30 00:21:28.038 [2024-05-15 11:13:25.204490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.038 [2024-05-15 11:13:25.204493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240c30) 00:21:28.038 [2024-05-15 11:13:25.204499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.038 [2024-05-15 11:13:25.204511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8f00, cid 4, qid 0 00:21:28.038 [2024-05-15 11:13:25.204586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.038 [2024-05-15 11:13:25.204591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.038 
[2024-05-15 11:13:25.204594] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:28.038 [2024-05-15 11:13:25.204597] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240c30): datao=0, datal=8, cccid=4
00:21:28.039 [2024-05-15 11:13:25.204601] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8f00) on tqpair(0x1240c30): expected_datao=0, payload_size=8
00:21:28.039 [2024-05-15 11:13:25.204605] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:28.039 [2024-05-15 11:13:25.204610] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:28.039 [2024-05-15 11:13:25.204613] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:28.039 [2024-05-15 11:13:25.245306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:28.039 [2024-05-15 11:13:25.245316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:28.039 [2024-05-15 11:13:25.245319] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:28.039 [2024-05-15 11:13:25.245322] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8f00) on tqpair=0x1240c30
00:21:28.039 =====================================================
00:21:28.039 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:28.039 =====================================================
00:21:28.039 Controller Capabilities/Features
00:21:28.039 ================================
00:21:28.039 Vendor ID: 0000
00:21:28.039 Subsystem Vendor ID: 0000
00:21:28.039 Serial Number: ....................
00:21:28.039 Model Number: ........................................
00:21:28.039 Firmware Version: 24.05
00:21:28.039 Recommended Arb Burst: 0
00:21:28.039 IEEE OUI Identifier: 00 00 00
00:21:28.039 Multi-path I/O
00:21:28.039 May have multiple subsystem ports: No
00:21:28.039 May have multiple controllers: No
00:21:28.039 Associated with SR-IOV VF: No
00:21:28.039 Max Data Transfer Size: 131072
00:21:28.039 Max Number of Namespaces: 0
00:21:28.039 Max Number of I/O Queues: 1024
00:21:28.039 NVMe Specification Version (VS): 1.3
00:21:28.039 NVMe Specification Version (Identify): 1.3
00:21:28.039 Maximum Queue Entries: 128
00:21:28.039 Contiguous Queues Required: Yes
00:21:28.039 Arbitration Mechanisms Supported
00:21:28.039 Weighted Round Robin: Not Supported
00:21:28.039 Vendor Specific: Not Supported
00:21:28.039 Reset Timeout: 15000 ms
00:21:28.039 Doorbell Stride: 4 bytes
00:21:28.039 NVM Subsystem Reset: Not Supported
00:21:28.039 Command Sets Supported
00:21:28.039 NVM Command Set: Supported
00:21:28.039 Boot Partition: Not Supported
00:21:28.039 Memory Page Size Minimum: 4096 bytes
00:21:28.039 Memory Page Size Maximum: 4096 bytes
00:21:28.039 Persistent Memory Region: Not Supported
00:21:28.039 Optional Asynchronous Events Supported
00:21:28.039 Namespace Attribute Notices: Not Supported
00:21:28.039 Firmware Activation Notices: Not Supported
00:21:28.039 ANA Change Notices: Not Supported
00:21:28.039 PLE Aggregate Log Change Notices: Not Supported
00:21:28.039 LBA Status Info Alert Notices: Not Supported
00:21:28.039 EGE Aggregate Log Change Notices: Not Supported
00:21:28.039 Normal NVM Subsystem Shutdown event: Not Supported
00:21:28.039 Zone Descriptor Change Notices: Not Supported
00:21:28.039 Discovery Log Change Notices: Supported
00:21:28.039 Controller Attributes
00:21:28.039 128-bit Host Identifier: Not Supported
00:21:28.039 Non-Operational Permissive Mode: Not Supported
00:21:28.039 NVM Sets: Not Supported
00:21:28.039 Read Recovery Levels: Not Supported
00:21:28.039 Endurance Groups: Not Supported
00:21:28.039 Predictable Latency Mode: Not Supported
00:21:28.039 Traffic Based Keep ALive: Not Supported
00:21:28.039 Namespace Granularity: Not Supported
00:21:28.039 SQ Associations: Not Supported
00:21:28.039 UUID List: Not Supported
00:21:28.039 Multi-Domain Subsystem: Not Supported
00:21:28.039 Fixed Capacity Management: Not Supported
00:21:28.039 Variable Capacity Management: Not Supported
00:21:28.039 Delete Endurance Group: Not Supported
00:21:28.039 Delete NVM Set: Not Supported
00:21:28.039 Extended LBA Formats Supported: Not Supported
00:21:28.039 Flexible Data Placement Supported: Not Supported
00:21:28.039 
00:21:28.039 Controller Memory Buffer Support
00:21:28.039 ================================
00:21:28.039 Supported: No
00:21:28.039 
00:21:28.039 Persistent Memory Region Support
00:21:28.039 ================================
00:21:28.039 Supported: No
00:21:28.039 
00:21:28.039 Admin Command Set Attributes
00:21:28.039 ============================
00:21:28.039 Security Send/Receive: Not Supported
00:21:28.039 Format NVM: Not Supported
00:21:28.039 Firmware Activate/Download: Not Supported
00:21:28.039 Namespace Management: Not Supported
00:21:28.039 Device Self-Test: Not Supported
00:21:28.039 Directives: Not Supported
00:21:28.039 NVMe-MI: Not Supported
00:21:28.039 Virtualization Management: Not Supported
00:21:28.039 Doorbell Buffer Config: Not Supported
00:21:28.039 Get LBA Status Capability: Not Supported
00:21:28.039 Command & Feature Lockdown Capability: Not Supported
00:21:28.039 Abort Command Limit: 1
00:21:28.039 Async Event Request Limit: 4
00:21:28.039 Number of Firmware Slots: N/A
00:21:28.039 Firmware Slot 1 Read-Only: N/A
00:21:28.039 Firmware Activation Without Reset: N/A
00:21:28.039 Multiple Update Detection Support: N/A
00:21:28.039 Firmware Update Granularity: No Information Provided
00:21:28.039 Per-Namespace SMART Log: No
00:21:28.039 Asymmetric Namespace Access Log Page: Not Supported
00:21:28.039 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:28.039 Command Effects Log Page: Not Supported
00:21:28.039 Get Log Page Extended Data: Supported
00:21:28.039 Telemetry Log Pages: Not Supported
00:21:28.039 Persistent Event Log Pages: Not Supported
00:21:28.039 Supported Log Pages Log Page: May Support
00:21:28.039 Commands Supported & Effects Log Page: Not Supported
00:21:28.039 Feature Identifiers & Effects Log Page:May Support
00:21:28.039 NVMe-MI Commands & Effects Log Page: May Support
00:21:28.039 Data Area 4 for Telemetry Log: Not Supported
00:21:28.039 Error Log Page Entries Supported: 128
00:21:28.039 Keep Alive: Not Supported
00:21:28.039 
00:21:28.039 NVM Command Set Attributes
00:21:28.039 ==========================
00:21:28.039 Submission Queue Entry Size
00:21:28.039 Max: 1
00:21:28.039 Min: 1
00:21:28.039 Completion Queue Entry Size
00:21:28.039 Max: 1
00:21:28.039 Min: 1
00:21:28.039 Number of Namespaces: 0
00:21:28.039 Compare Command: Not Supported
00:21:28.039 Write Uncorrectable Command: Not Supported
00:21:28.039 Dataset Management Command: Not Supported
00:21:28.039 Write Zeroes Command: Not Supported
00:21:28.039 Set Features Save Field: Not Supported
00:21:28.039 Reservations: Not Supported
00:21:28.039 Timestamp: Not Supported
00:21:28.039 Copy: Not Supported
00:21:28.039 Volatile Write Cache: Not Present
00:21:28.039 Atomic Write Unit (Normal): 1
00:21:28.039 Atomic Write Unit (PFail): 1
00:21:28.039 Atomic Compare & Write Unit: 1
00:21:28.039 Fused Compare & Write: Supported
00:21:28.039 Scatter-Gather List
00:21:28.039 SGL Command Set: Supported
00:21:28.039 SGL Keyed: Supported
00:21:28.039 SGL Bit Bucket Descriptor: Not Supported
00:21:28.039 SGL Metadata Pointer: Not Supported
00:21:28.039 Oversized SGL: Not Supported
00:21:28.039 SGL Metadata Address: Not Supported
00:21:28.039 SGL Offset: Supported
00:21:28.039 Transport SGL Data Block: Not Supported
00:21:28.039 Replay Protected Memory Block: Not Supported
00:21:28.039 
00:21:28.039 Firmware Slot Information
00:21:28.039 =========================
00:21:28.039 Active slot: 0
00:21:28.039 
00:21:28.039 
00:21:28.039 Error Log
00:21:28.039 =========
00:21:28.039 
00:21:28.039 Active Namespaces
00:21:28.039 =================
00:21:28.039 Discovery Log Page
00:21:28.039 ==================
00:21:28.039 Generation Counter: 2
00:21:28.039 Number of Records: 2
00:21:28.039 Record Format: 0
00:21:28.039 
00:21:28.039 Discovery Log Entry 0
00:21:28.039 ----------------------
00:21:28.039 Transport Type: 3 (TCP)
00:21:28.039 Address Family: 1 (IPv4)
00:21:28.039 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:28.039 Entry Flags:
00:21:28.039 Duplicate Returned Information: 1
00:21:28.039 Explicit Persistent Connection Support for Discovery: 1
00:21:28.039 Transport Requirements:
00:21:28.039 Secure Channel: Not Required
00:21:28.039 Port ID: 0 (0x0000)
00:21:28.039 Controller ID: 65535 (0xffff)
00:21:28.039 Admin Max SQ Size: 128
00:21:28.039 Transport Service Identifier: 4420
00:21:28.039 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:28.039 Transport Address: 10.0.0.2
00:21:28.039 Discovery Log Entry 1
00:21:28.039 ----------------------
00:21:28.040 Transport Type: 3 (TCP)
00:21:28.040 Address Family: 1 (IPv4)
00:21:28.040 Subsystem Type: 2 (NVM Subsystem)
00:21:28.040 Entry Flags:
00:21:28.040 Duplicate Returned Information: 0
00:21:28.040 Explicit Persistent Connection Support for Discovery: 0
00:21:28.040 Transport Requirements:
00:21:28.040 Secure Channel: Not Required
00:21:28.040 Port ID: 0 (0x0000)
00:21:28.040 Controller ID: 65535 (0xffff)
00:21:28.040 Admin Max SQ Size: 128
00:21:28.040 Transport Service Identifier: 4420
00:21:28.040 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:28.040 Transport Address: 10.0.0.2 [2024-05-15 11:13:25.245400] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:21:28.040 [2024-05-15 11:13:25.245413]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.040 [2024-05-15 11:13:25.245419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.040 [2024-05-15 11:13:25.245424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.040 [2024-05-15 11:13:25.245430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.040 [2024-05-15 11:13:25.245437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.245450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.245463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.245524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.245530] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.245533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245537] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.245543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245546] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 
11:13:25.245549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.245555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.245567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.245642] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.245648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.245651] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245654] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.245659] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:28.040 [2024-05-15 11:13:25.245663] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:28.040 [2024-05-15 11:13:25.245671] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.245682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.245691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.245757] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 
11:13:25.245763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.245766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.245778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245784] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.245790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.245799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.245883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.245888] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.245891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.245903] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.245910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.245915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 
[2024-05-15 11:13:25.245925] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.245992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.245997] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.246000] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.246012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246015] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.246024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.246033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.246102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.246107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.246110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246113] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.246122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246128] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.246134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.246143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.246215] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.246222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.246225] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246229] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.246237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.246249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.246259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.246323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.246328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.246331] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246334] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 
[2024-05-15 11:13:25.246343] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246346] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.246354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.246363] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.040 [2024-05-15 11:13:25.246439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.040 [2024-05-15 11:13:25.246444] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.040 [2024-05-15 11:13:25.246447] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246450] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.040 [2024-05-15 11:13:25.246459] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.040 [2024-05-15 11:13:25.246466] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.040 [2024-05-15 11:13:25.246471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.040 [2024-05-15 11:13:25.246480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.246546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.246552] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 
[2024-05-15 11:13:25.246555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.246566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.246578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.246587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.246651] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.246657] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.246662] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246665] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.246673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246677] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246680] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.246685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.246694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, 
cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.246758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.246763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.246766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.246778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246784] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.246790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.246798] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.246874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.246879] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.246882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.246894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246898] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.246906] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.246915] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.246982] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.246987] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.246990] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.246993] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.247002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.247013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.247023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.247089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.247094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.247097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247102] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.247110] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247114] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247117] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.247122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.247131] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.247197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.247202] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.247205] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247208] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.247217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247220] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247223] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.247229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.247238] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.247304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.247310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.247313] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247316] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.247324] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247331] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.247336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.247345] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.247409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 [2024-05-15 11:13:25.247415] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.247418] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.247429] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247435] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.041 [2024-05-15 11:13:25.247441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.041 [2024-05-15 11:13:25.247450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.041 [2024-05-15 11:13:25.247516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.041 
[2024-05-15 11:13:25.247521] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.041 [2024-05-15 11:13:25.247524] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.041 [2024-05-15 11:13:25.247527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.041 [2024-05-15 11:13:25.247537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.247549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.042 [2024-05-15 11:13:25.247558] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.247625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.247630] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 [2024-05-15 11:13:25.247633] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247636] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 [2024-05-15 11:13:25.247645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.247656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:28.042 [2024-05-15 11:13:25.247665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.247734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.247739] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 [2024-05-15 11:13:25.247742] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247745] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 [2024-05-15 11:13:25.247753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247760] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.247765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.042 [2024-05-15 11:13:25.247774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.247840] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.247846] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 [2024-05-15 11:13:25.247849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247852] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 [2024-05-15 11:13:25.247860] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247866] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.247872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.042 [2024-05-15 11:13:25.247881] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.247947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.247953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 [2024-05-15 11:13:25.247955] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 [2024-05-15 11:13:25.247967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247973] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.247977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.247982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.042 [2024-05-15 11:13:25.247991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.248056] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.248061] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 [2024-05-15 11:13:25.248064] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.248067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 
[2024-05-15 11:13:25.248076] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.248079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.248082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.248087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.042 [2024-05-15 11:13:25.248096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.252171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.252179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 [2024-05-15 11:13:25.252182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.252185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 [2024-05-15 11:13:25.252195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.252198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.252201] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240c30) 00:21:28.042 [2024-05-15 11:13:25.252207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.042 [2024-05-15 11:13:25.252218] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8da0, cid 3, qid 0 00:21:28.042 [2024-05-15 11:13:25.252368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.042 [2024-05-15 11:13:25.252374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.042 
[2024-05-15 11:13:25.252377] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.042 [2024-05-15 11:13:25.252380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8da0) on tqpair=0x1240c30 00:21:28.042 [2024-05-15 11:13:25.252387] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:28.042 00:21:28.042 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:28.042 [2024-05-15 11:13:25.288356] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:21:28.042 [2024-05-15 11:13:25.288401] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329623 ] 00:21:28.042 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.307 [2024-05-15 11:13:25.316082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:28.307 [2024-05-15 11:13:25.316123] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:28.307 [2024-05-15 11:13:25.316128] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:28.307 [2024-05-15 11:13:25.316139] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:28.307 [2024-05-15 11:13:25.316146] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:28.307 [2024-05-15 11:13:25.320189] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:28.307 [2024-05-15 
11:13:25.320209] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7eec30 0 00:21:28.307 [2024-05-15 11:13:25.331173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:28.307 [2024-05-15 11:13:25.331194] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:28.307 [2024-05-15 11:13:25.331198] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:28.307 [2024-05-15 11:13:25.331201] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:28.307 [2024-05-15 11:13:25.331228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.331233] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.331236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.307 [2024-05-15 11:13:25.331246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:28.307 [2024-05-15 11:13:25.331261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.307 [2024-05-15 11:13:25.342175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.307 [2024-05-15 11:13:25.342185] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.307 [2024-05-15 11:13:25.342188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342192] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.307 [2024-05-15 11:13:25.342204] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:28.307 [2024-05-15 11:13:25.342210] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:28.307 [2024-05-15 11:13:25.342216] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:28.307 [2024-05-15 11:13:25.342225] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.307 [2024-05-15 11:13:25.342239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.307 [2024-05-15 11:13:25.342252] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.307 [2024-05-15 11:13:25.342416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.307 [2024-05-15 11:13:25.342422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.307 [2024-05-15 11:13:25.342425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.307 [2024-05-15 11:13:25.342433] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:28.307 [2024-05-15 11:13:25.342439] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:28.307 [2024-05-15 11:13:25.342446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342455] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.307 [2024-05-15 11:13:25.342461] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.307 [2024-05-15 11:13:25.342471] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.307 [2024-05-15 11:13:25.342536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.307 [2024-05-15 11:13:25.342541] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.307 [2024-05-15 11:13:25.342544] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.307 [2024-05-15 11:13:25.342552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:28.307 [2024-05-15 11:13:25.342558] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:28.307 [2024-05-15 11:13:25.342564] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342567] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342570] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.307 [2024-05-15 11:13:25.342576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.307 [2024-05-15 11:13:25.342585] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.307 [2024-05-15 11:13:25.342664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.307 [2024-05-15 11:13:25.342670] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.307 [2024-05-15 11:13:25.342673] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.307 [2024-05-15 11:13:25.342680] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:28.307 [2024-05-15 11:13:25.342688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342695] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.307 [2024-05-15 11:13:25.342701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.307 [2024-05-15 11:13:25.342710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.307 [2024-05-15 11:13:25.342816] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.307 [2024-05-15 11:13:25.342822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.307 [2024-05-15 11:13:25.342825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.307 [2024-05-15 11:13:25.342829] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.307 [2024-05-15 11:13:25.342832] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:28.308 [2024-05-15 11:13:25.342836] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:28.308 [2024-05-15 11:13:25.342843] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:28.308 [2024-05-15 11:13:25.342948] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:28.308 [2024-05-15 11:13:25.342951] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:28.308 [2024-05-15 11:13:25.342960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.342963] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.342966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.342971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.308 [2024-05-15 11:13:25.342981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.308 [2024-05-15 11:13:25.343099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.308 [2024-05-15 11:13:25.343104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.308 [2024-05-15 11:13:25.343107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.308 [2024-05-15 11:13:25.343115] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:28.308 [2024-05-15 11:13:25.343122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.308 [2024-05-15 11:13:25.343144] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.308 [2024-05-15 11:13:25.343217] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.308 [2024-05-15 11:13:25.343223] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.308 [2024-05-15 11:13:25.343226] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343229] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.308 [2024-05-15 11:13:25.343233] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:28.308 [2024-05-15 11:13:25.343237] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:28.308 [2024-05-15 11:13:25.343244] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:28.308 [2024-05-15 11:13:25.343254] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:28.308 [2024-05-15 11:13:25.343261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343265] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:28.308 [2024-05-15 11:13:25.343281] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.308 [2024-05-15 11:13:25.343409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.308 [2024-05-15 11:13:25.343415] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.308 [2024-05-15 11:13:25.343418] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343422] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=4096, cccid=0 00:21:28.308 [2024-05-15 11:13:25.343425] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x856980) on tqpair(0x7eec30): expected_datao=0, payload_size=4096 00:21:28.308 [2024-05-15 11:13:25.343429] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343438] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343442] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.308 [2024-05-15 11:13:25.343481] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.308 [2024-05-15 11:13:25.343484] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343487] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.308 [2024-05-15 11:13:25.343493] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:28.308 [2024-05-15 11:13:25.343497] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:28.308 [2024-05-15 11:13:25.343501] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 
0x0001 00:21:28.308 [2024-05-15 11:13:25.343504] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:28.308 [2024-05-15 11:13:25.343508] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:28.308 [2024-05-15 11:13:25.343512] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:28.308 [2024-05-15 11:13:25.343522] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:28.308 [2024-05-15 11:13:25.343530] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.308 [2024-05-15 11:13:25.343552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.308 [2024-05-15 11:13:25.343668] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.308 [2024-05-15 11:13:25.343674] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.308 [2024-05-15 11:13:25.343677] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343680] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856980) on tqpair=0x7eec30 00:21:28.308 [2024-05-15 11:13:25.343688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:21:28.308 [2024-05-15 11:13:25.343694] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:28.308 [2024-05-15 11:13:25.343705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343708] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343711] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:28.308 [2024-05-15 11:13:25.343721] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343724] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343727] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:28.308 [2024-05-15 11:13:25.343737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343744] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:28.308 [2024-05-15 11:13:25.343754] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:28.308 [2024-05-15 11:13:25.343761] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:28.308 [2024-05-15 11:13:25.343767] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.308 [2024-05-15 11:13:25.343770] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.308 [2024-05-15 11:13:25.343775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.308 [2024-05-15 11:13:25.343786] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856980, cid 0, qid 0 00:21:28.308 [2024-05-15 11:13:25.343791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856ae0, cid 1, qid 0 00:21:28.308 [2024-05-15 11:13:25.343795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856c40, cid 2, qid 0 00:21:28.308 [2024-05-15 11:13:25.343799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.308 [2024-05-15 11:13:25.343803] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.309 [2024-05-15 11:13:25.343924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.309 [2024-05-15 11:13:25.343929] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.309 [2024-05-15 11:13:25.343932] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.343935] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.309 [2024-05-15 11:13:25.343941] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 
5000000 us 00:21:28.309 [2024-05-15 11:13:25.343946] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.343953] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.343959] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.343964] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.343968] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.343971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.309 [2024-05-15 11:13:25.343976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.309 [2024-05-15 11:13:25.343985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.309 [2024-05-15 11:13:25.344073] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.309 [2024-05-15 11:13:25.344079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.309 [2024-05-15 11:13:25.344082] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344085] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.309 [2024-05-15 11:13:25.344129] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify active ns (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344146] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.309 [2024-05-15 11:13:25.344156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.309 [2024-05-15 11:13:25.344171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.309 [2024-05-15 11:13:25.344243] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.309 [2024-05-15 11:13:25.344249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.309 [2024-05-15 11:13:25.344252] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344255] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=4096, cccid=4 00:21:28.309 [2024-05-15 11:13:25.344259] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x856f00) on tqpair(0x7eec30): expected_datao=0, payload_size=4096 00:21:28.309 [2024-05-15 11:13:25.344262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344302] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344306] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.309 [2024-05-15 11:13:25.344382] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.309 [2024-05-15 11:13:25.344385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344388] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.309 [2024-05-15 11:13:25.344398] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:28.309 [2024-05-15 11:13:25.344406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344414] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344420] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344424] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.309 [2024-05-15 11:13:25.344429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.309 [2024-05-15 11:13:25.344439] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.309 [2024-05-15 11:13:25.344512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.309 [2024-05-15 11:13:25.344518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.309 [2024-05-15 11:13:25.344521] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344524] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=4096, cccid=4 00:21:28.309 [2024-05-15 11:13:25.344528] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x856f00) on tqpair(0x7eec30): expected_datao=0, payload_size=4096 00:21:28.309 [2024-05-15 11:13:25.344531] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344541] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344545] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344627] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.309 [2024-05-15 11:13:25.344633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.309 [2024-05-15 11:13:25.344636] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.309 [2024-05-15 11:13:25.344652] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344660] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344666] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.309 [2024-05-15 11:13:25.344675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.309 [2024-05-15 11:13:25.344686] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.309 [2024-05-15 11:13:25.344757] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.309 [2024-05-15 11:13:25.344762] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.309 [2024-05-15 11:13:25.344765] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344768] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x7eec30): datao=0, datal=4096, cccid=4 00:21:28.309 [2024-05-15 11:13:25.344772] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x856f00) on tqpair(0x7eec30): expected_datao=0, payload_size=4096 00:21:28.309 [2024-05-15 11:13:25.344776] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344794] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344798] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.309 [2024-05-15 11:13:25.344885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.309 [2024-05-15 11:13:25.344888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.309 [2024-05-15 11:13:25.344900] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344907] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344918] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344923] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344927] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:28.309 [2024-05-15 11:13:25.344931] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:28.309 [2024-05-15 11:13:25.344935] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:28.309 [2024-05-15 11:13:25.344949] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.309 [2024-05-15 11:13:25.344959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.309 [2024-05-15 11:13:25.344965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344968] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.309 [2024-05-15 11:13:25.344972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eec30) 00:21:28.309 [2024-05-15 11:13:25.344978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:28.309 [2024-05-15 11:13:25.344990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.310 [2024-05-15 11:13:25.344995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857060, cid 5, qid 0 00:21:28.310 [2024-05-15 11:13:25.345075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.345081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.345083] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345087] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.345092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.345097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.345100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345103] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857060) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.345112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345130] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857060, cid 5, qid 0 00:21:28.310 [2024-05-15 11:13:25.345216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.345222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.345225] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857060) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.345235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857060, cid 5, qid 0 00:21:28.310 [2024-05-15 11:13:25.345366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.345371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.345374] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345378] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857060) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.345385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345389] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857060, cid 5, qid 0 00:21:28.310 [2024-05-15 11:13:25.345518] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.345524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.345527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857060) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.345541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345545] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345559] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7eec30) 00:21:28.310 [2024-05-15 11:13:25.345596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.310 [2024-05-15 11:13:25.345606] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857060, cid 5, qid 0 00:21:28.310 [2024-05-15 11:13:25.345610] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856f00, cid 4, qid 0 00:21:28.310 [2024-05-15 11:13:25.345614] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8571c0, cid 6, qid 0 00:21:28.310 [2024-05-15 11:13:25.345618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x857320, cid 7, qid 0 00:21:28.310 [2024-05-15 11:13:25.345783] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.310 [2024-05-15 11:13:25.345789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.310 [2024-05-15 11:13:25.345792] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345795] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=8192, cccid=5 00:21:28.310 [2024-05-15 11:13:25.345799] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x857060) on tqpair(0x7eec30): expected_datao=0, payload_size=8192 00:21:28.310 [2024-05-15 11:13:25.345803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345818] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345822] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345827] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.310 [2024-05-15 11:13:25.345831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.310 [2024-05-15 11:13:25.345834] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345837] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=512, cccid=4 00:21:28.310 [2024-05-15 11:13:25.345841] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x856f00) on tqpair(0x7eec30): expected_datao=0, payload_size=512 00:21:28.310 [2024-05-15 11:13:25.345845] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345850] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345853] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345858] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.310 [2024-05-15 11:13:25.345862] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.310 [2024-05-15 11:13:25.345867] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345870] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=512, cccid=6 00:21:28.310 [2024-05-15 11:13:25.345874] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8571c0) on tqpair(0x7eec30): expected_datao=0, payload_size=512 00:21:28.310 [2024-05-15 11:13:25.345877] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345882] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345886] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345890] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:28.310 [2024-05-15 11:13:25.345895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:28.310 [2024-05-15 11:13:25.345898] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345901] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eec30): datao=0, datal=4096, cccid=7 00:21:28.310 [2024-05-15 11:13:25.345905] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x857320) on tqpair(0x7eec30): expected_datao=0, payload_size=4096 00:21:28.310 [2024-05-15 11:13:25.345908] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345914] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.345916] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.387175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:28.310 [2024-05-15 11:13:25.387186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.387189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.387193] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857060) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.387205] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.387210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.387213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.387216] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856f00) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.387223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.387228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.387231] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.387235] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8571c0) on tqpair=0x7eec30 00:21:28.310 [2024-05-15 11:13:25.387242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.310 [2024-05-15 11:13:25.387247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.310 [2024-05-15 11:13:25.387250] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.310 [2024-05-15 11:13:25.387253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857320) on tqpair=0x7eec30 00:21:28.310 ===================================================== 00:21:28.310 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.310 ===================================================== 00:21:28.310 Controller 
Capabilities/Features
00:21:28.310 ================================
00:21:28.310 Vendor ID: 8086
00:21:28.310 Subsystem Vendor ID: 8086
00:21:28.310 Serial Number: SPDK00000000000001
00:21:28.310 Model Number: SPDK bdev Controller
00:21:28.310 Firmware Version: 24.05
00:21:28.310 Recommended Arb Burst: 6
00:21:28.310 IEEE OUI Identifier: e4 d2 5c
00:21:28.310 Multi-path I/O
00:21:28.310 May have multiple subsystem ports: Yes
00:21:28.310 May have multiple controllers: Yes
00:21:28.310 Associated with SR-IOV VF: No
00:21:28.310 Max Data Transfer Size: 131072
00:21:28.310 Max Number of Namespaces: 32
00:21:28.310 Max Number of I/O Queues: 127
00:21:28.310 NVMe Specification Version (VS): 1.3
00:21:28.310 NVMe Specification Version (Identify): 1.3
00:21:28.310 Maximum Queue Entries: 128
00:21:28.311 Contiguous Queues Required: Yes
00:21:28.311 Arbitration Mechanisms Supported
00:21:28.311 Weighted Round Robin: Not Supported
00:21:28.311 Vendor Specific: Not Supported
00:21:28.311 Reset Timeout: 15000 ms
00:21:28.311 Doorbell Stride: 4 bytes
00:21:28.311 NVM Subsystem Reset: Not Supported
00:21:28.311 Command Sets Supported
00:21:28.311 NVM Command Set: Supported
00:21:28.311 Boot Partition: Not Supported
00:21:28.311 Memory Page Size Minimum: 4096 bytes
00:21:28.311 Memory Page Size Maximum: 4096 bytes
00:21:28.311 Persistent Memory Region: Not Supported
00:21:28.311 Optional Asynchronous Events Supported
00:21:28.311 Namespace Attribute Notices: Supported
00:21:28.311 Firmware Activation Notices: Not Supported
00:21:28.311 ANA Change Notices: Not Supported
00:21:28.311 PLE Aggregate Log Change Notices: Not Supported
00:21:28.311 LBA Status Info Alert Notices: Not Supported
00:21:28.311 EGE Aggregate Log Change Notices: Not Supported
00:21:28.311 Normal NVM Subsystem Shutdown event: Not Supported
00:21:28.311 Zone Descriptor Change Notices: Not Supported
00:21:28.311 Discovery Log Change Notices: Not Supported
00:21:28.311 Controller Attributes
00:21:28.311 128-bit Host Identifier: Supported
00:21:28.311 Non-Operational Permissive Mode: Not Supported
00:21:28.311 NVM Sets: Not Supported
00:21:28.311 Read Recovery Levels: Not Supported
00:21:28.311 Endurance Groups: Not Supported
00:21:28.311 Predictable Latency Mode: Not Supported
00:21:28.311 Traffic Based Keep ALive: Not Supported
00:21:28.311 Namespace Granularity: Not Supported
00:21:28.311 SQ Associations: Not Supported
00:21:28.311 UUID List: Not Supported
00:21:28.311 Multi-Domain Subsystem: Not Supported
00:21:28.311 Fixed Capacity Management: Not Supported
00:21:28.311 Variable Capacity Management: Not Supported
00:21:28.311 Delete Endurance Group: Not Supported
00:21:28.311 Delete NVM Set: Not Supported
00:21:28.311 Extended LBA Formats Supported: Not Supported
00:21:28.311 Flexible Data Placement Supported: Not Supported
00:21:28.311 
00:21:28.311 Controller Memory Buffer Support
00:21:28.311 ================================
00:21:28.311 Supported: No
00:21:28.311 
00:21:28.311 Persistent Memory Region Support
00:21:28.311 ================================
00:21:28.311 Supported: No
00:21:28.311 
00:21:28.311 Admin Command Set Attributes
00:21:28.311 ============================
00:21:28.311 Security Send/Receive: Not Supported
00:21:28.311 Format NVM: Not Supported
00:21:28.311 Firmware Activate/Download: Not Supported
00:21:28.311 Namespace Management: Not Supported
00:21:28.311 Device Self-Test: Not Supported
00:21:28.311 Directives: Not Supported
00:21:28.311 NVMe-MI: Not Supported
00:21:28.311 Virtualization Management: Not Supported
00:21:28.311 Doorbell Buffer Config: Not Supported
00:21:28.311 Get LBA Status Capability: Not Supported
00:21:28.311 Command & Feature Lockdown Capability: Not Supported
00:21:28.311 Abort Command Limit: 4
00:21:28.311 Async Event Request Limit: 4
00:21:28.311 Number of Firmware Slots: N/A
00:21:28.311 Firmware Slot 1 Read-Only: N/A
00:21:28.311 Firmware Activation Without Reset: N/A
00:21:28.311 Multiple Update Detection Support: N/A
00:21:28.311 Firmware Update Granularity: No Information Provided
00:21:28.311 Per-Namespace SMART Log: No
00:21:28.311 Asymmetric Namespace Access Log Page: Not Supported
00:21:28.311 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:28.311 Command Effects Log Page: Supported
00:21:28.311 Get Log Page Extended Data: Supported
00:21:28.311 Telemetry Log Pages: Not Supported
00:21:28.311 Persistent Event Log Pages: Not Supported
00:21:28.311 Supported Log Pages Log Page: May Support
00:21:28.311 Commands Supported & Effects Log Page: Not Supported
00:21:28.311 Feature Identifiers & Effects Log Page:May Support
00:21:28.311 NVMe-MI Commands & Effects Log Page: May Support
00:21:28.311 Data Area 4 for Telemetry Log: Not Supported
00:21:28.311 Error Log Page Entries Supported: 128
00:21:28.311 Keep Alive: Supported
00:21:28.311 Keep Alive Granularity: 10000 ms
00:21:28.311 
00:21:28.311 NVM Command Set Attributes
00:21:28.311 ==========================
00:21:28.311 Submission Queue Entry Size
00:21:28.311 Max: 64
00:21:28.311 Min: 64
00:21:28.311 Completion Queue Entry Size
00:21:28.311 Max: 16
00:21:28.311 Min: 16
00:21:28.311 Number of Namespaces: 32
00:21:28.311 Compare Command: Supported
00:21:28.311 Write Uncorrectable Command: Not Supported
00:21:28.311 Dataset Management Command: Supported
00:21:28.311 Write Zeroes Command: Supported
00:21:28.311 Set Features Save Field: Not Supported
00:21:28.311 Reservations: Supported
00:21:28.311 Timestamp: Not Supported
00:21:28.311 Copy: Supported
00:21:28.311 Volatile Write Cache: Present
00:21:28.311 Atomic Write Unit (Normal): 1
00:21:28.311 Atomic Write Unit (PFail): 1
00:21:28.311 Atomic Compare & Write Unit: 1
00:21:28.311 Fused Compare & Write: Supported
00:21:28.311 Scatter-Gather List
00:21:28.311 SGL Command Set: Supported
00:21:28.311 SGL Keyed: Supported
00:21:28.311 SGL Bit Bucket Descriptor: Not Supported
00:21:28.311 SGL Metadata Pointer: Not Supported
00:21:28.311 Oversized SGL: Not Supported
00:21:28.311 SGL Metadata Address: Not Supported
00:21:28.311 SGL Offset: Supported
00:21:28.311 Transport SGL Data Block: Not Supported
00:21:28.311 Replay Protected Memory Block: Not Supported
00:21:28.311 
00:21:28.311 Firmware Slot Information
00:21:28.311 =========================
00:21:28.311 Active slot: 1
00:21:28.311 Slot 1 Firmware Revision: 24.05
00:21:28.311 
00:21:28.311 
00:21:28.311 Commands Supported and Effects
00:21:28.311 ==============================
00:21:28.311 Admin Commands
00:21:28.311 --------------
00:21:28.311 Get Log Page (02h): Supported
00:21:28.311 Identify (06h): Supported
00:21:28.311 Abort (08h): Supported
00:21:28.311 Set Features (09h): Supported
00:21:28.311 Get Features (0Ah): Supported
00:21:28.311 Asynchronous Event Request (0Ch): Supported
00:21:28.311 Keep Alive (18h): Supported
00:21:28.311 I/O Commands
00:21:28.311 ------------
00:21:28.311 Flush (00h): Supported LBA-Change
00:21:28.311 Write (01h): Supported LBA-Change
00:21:28.311 Read (02h): Supported
00:21:28.311 Compare (05h): Supported
00:21:28.311 Write Zeroes (08h): Supported LBA-Change
00:21:28.311 Dataset Management (09h): Supported LBA-Change
00:21:28.311 Copy (19h): Supported LBA-Change
00:21:28.311 Unknown (79h): Supported LBA-Change
00:21:28.311 Unknown (7Ah): Supported
00:21:28.311 
00:21:28.311 Error Log
00:21:28.311 =========
00:21:28.311 
00:21:28.311 Arbitration
00:21:28.311 ===========
00:21:28.311 Arbitration Burst: 1
00:21:28.311 
00:21:28.311 Power Management
00:21:28.311 ================
00:21:28.311 Number of Power States: 1
00:21:28.312 Current Power State: Power State #0
00:21:28.312 Power State #0:
00:21:28.312 Max Power: 0.00 W
00:21:28.312 Non-Operational State: Operational
00:21:28.312 Entry Latency: Not Reported
00:21:28.312 Exit Latency: Not Reported
00:21:28.312 Relative Read Throughput: 0
00:21:28.312 Relative Read Latency: 0
00:21:28.312 Relative Write Throughput: 0
00:21:28.312 Relative Write Latency: 0
00:21:28.312 Idle Power: Not Reported
00:21:28.312 Active Power: Not Reported 00:21:28.312 Non-Operational Permissive Mode: Not Supported 00:21:28.312 00:21:28.312 Health Information 00:21:28.312 ================== 00:21:28.312 Critical Warnings: 00:21:28.312 Available Spare Space: OK 00:21:28.312 Temperature: OK 00:21:28.312 Device Reliability: OK 00:21:28.312 Read Only: No 00:21:28.312 Volatile Memory Backup: OK 00:21:28.312 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:28.312 Temperature Threshold: [2024-05-15 11:13:25.387338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387343] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7eec30) 00:21:28.312 [2024-05-15 11:13:25.387349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.312 [2024-05-15 11:13:25.387362] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857320, cid 7, qid 0 00:21:28.312 [2024-05-15 11:13:25.387431] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.312 [2024-05-15 11:13:25.387437] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.312 [2024-05-15 11:13:25.387440] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387443] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x857320) on tqpair=0x7eec30 00:21:28.312 [2024-05-15 11:13:25.387470] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:28.312 [2024-05-15 11:13:25.387482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.312 [2024-05-15 11:13:25.387487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.312 
[2024-05-15 11:13:25.387493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.312 [2024-05-15 11:13:25.387498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.312 [2024-05-15 11:13:25.387504] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387508] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.312 [2024-05-15 11:13:25.387517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.312 [2024-05-15 11:13:25.387528] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.312 [2024-05-15 11:13:25.387593] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.312 [2024-05-15 11:13:25.387599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.312 [2024-05-15 11:13:25.387602] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387605] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.312 [2024-05-15 11:13:25.387611] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387617] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.312 [2024-05-15 11:13:25.387623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.312 [2024-05-15 
11:13:25.387634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.312 [2024-05-15 11:13:25.387719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.312 [2024-05-15 11:13:25.387725] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.312 [2024-05-15 11:13:25.387728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.312 [2024-05-15 11:13:25.387735] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:28.312 [2024-05-15 11:13:25.387738] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:28.312 [2024-05-15 11:13:25.387746] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387750] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.312 [2024-05-15 11:13:25.387758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.312 [2024-05-15 11:13:25.387767] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.312 [2024-05-15 11:13:25.387836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.312 [2024-05-15 11:13:25.387842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.312 [2024-05-15 11:13:25.387845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 
00:21:28.312 [2024-05-15 11:13:25.387857] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387862] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387865] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.312 [2024-05-15 11:13:25.387870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.312 [2024-05-15 11:13:25.387879] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.312 [2024-05-15 11:13:25.387953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.312 [2024-05-15 11:13:25.387959] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.312 [2024-05-15 11:13:25.387962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387965] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.312 [2024-05-15 11:13:25.387973] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.312 [2024-05-15 11:13:25.387979] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.312 [2024-05-15 11:13:25.387985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.312 [2024-05-15 11:13:25.387994] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.312 [2024-05-15 11:13:25.388060] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 
[2024-05-15 11:13:25.388069] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388084] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388087] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388101] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388177] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388180] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388192] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388209] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, 
qid 0 00:21:28.313 [2024-05-15 11:13:25.388280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388289] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388300] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388323] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388406] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388417] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388424] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388429] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388438] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388525] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388528] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388617] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388620] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388628] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388631] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388649] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388725] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388745] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388752] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388767] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388848] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388851] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388854] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388862] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388865] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.388944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.388950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.388953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.388964] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388968] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.388971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.388976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.388985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.389048] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 
11:13:25.389054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.389056] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.389059] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.389067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.389071] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.389074] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.389079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 11:13:25.389088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.313 [2024-05-15 11:13:25.389158] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.313 [2024-05-15 11:13:25.389169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.313 [2024-05-15 11:13:25.389172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.389175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.313 [2024-05-15 11:13:25.389183] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.389187] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.313 [2024-05-15 11:13:25.389190] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.313 [2024-05-15 11:13:25.389195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.313 [2024-05-15 
11:13:25.389206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389274] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389277] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389288] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389292] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389295] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389373] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389385] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389392] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389396] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389481] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389484] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389487] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389502] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389516] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389594] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389604] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389627] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389693] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389699] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389702] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389705] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389713] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389719] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389811] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389822] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389829] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.389912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.389918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.389920] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389923] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.389931] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389935] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.389938] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.389944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.389952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 
11:13:25.390021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.390027] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.390030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.390041] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390047] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.390053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.390061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.390130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.390136] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.390139] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.390150] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390153] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390156] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.390162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.390176] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.390241] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.390247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.390250] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.390261] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390264] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.314 [2024-05-15 11:13:25.390273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.314 [2024-05-15 11:13:25.390282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.314 [2024-05-15 11:13:25.390353] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.314 [2024-05-15 11:13:25.390359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.314 [2024-05-15 11:13:25.390362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.314 [2024-05-15 11:13:25.390373] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.314 [2024-05-15 11:13:25.390376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:21:28.315 [2024-05-15 11:13:25.390379] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.390384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.390393] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.390462] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.390467] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.390470] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390474] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.390482] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390485] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.390493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.390502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.390570] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.390577] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.390580] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390583] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.390591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.390603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.390613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.390677] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.390683] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.390686] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390689] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.390697] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390703] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.390709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.390718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.390788] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.390794] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.390797] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390800] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.390808] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390811] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.390820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.390828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.390898] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.390903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.390906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390909] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.390917] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.390924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.390929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.390938] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.391006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.391012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.391016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.391020] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.391028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.391031] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.391034] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.391040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.391048] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.391115] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.391121] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.391123] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.391127] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.391135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.391138] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.391141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.391147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.391155] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.395172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.395180] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.395183] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.395186] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.395196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.395199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.395202] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eec30) 00:21:28.315 [2024-05-15 11:13:25.395208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.315 [2024-05-15 11:13:25.395219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x856da0, cid 3, qid 0 00:21:28.315 [2024-05-15 11:13:25.395292] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:28.315 [2024-05-15 11:13:25.395298] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:28.315 [2024-05-15 11:13:25.395301] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:28.315 [2024-05-15 11:13:25.395304] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x856da0) on tqpair=0x7eec30 00:21:28.315 [2024-05-15 11:13:25.395310] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:21:28.315 0 Kelvin (-273 Celsius) 00:21:28.315 Available Spare: 0% 00:21:28.315 Available Spare Threshold: 0% 00:21:28.315 Life Percentage Used: 0% 00:21:28.315 Data Units Read: 0 00:21:28.315 Data Units Written: 0 00:21:28.315 Host Read Commands: 0 00:21:28.315 Host Write Commands: 0 00:21:28.315 Controller Busy Time: 0 minutes 00:21:28.315 Power Cycles: 0 00:21:28.315 Power On Hours: 0 hours 00:21:28.315 Unsafe Shutdowns: 0 00:21:28.315 Unrecoverable Media Errors: 0 00:21:28.315 Lifetime Error Log Entries: 0 00:21:28.315 Warning Temperature Time: 0 minutes 00:21:28.315 Critical Temperature Time: 0 minutes 00:21:28.315 00:21:28.315 Number of Queues 00:21:28.315 ================ 00:21:28.315 Number of I/O Submission Queues: 127 00:21:28.315 Number of I/O Completion Queues: 127 00:21:28.315 00:21:28.315 Active Namespaces 00:21:28.315 ================= 00:21:28.315 Namespace ID:1 00:21:28.315 Error Recovery Timeout: Unlimited 00:21:28.315 Command Set Identifier: NVM (00h) 00:21:28.315 Deallocate: Supported 00:21:28.315 Deallocated/Unwritten Error: Not Supported 00:21:28.315 Deallocated Read Value: Unknown 00:21:28.315 Deallocate in Write Zeroes: Not Supported 00:21:28.315 Deallocated Guard Field: 0xFFFF 00:21:28.315 Flush: Supported 00:21:28.315 Reservation: Supported 00:21:28.315 Namespace Sharing Capabilities: Multiple Controllers 00:21:28.315 Size (in LBAs): 131072 (0GiB) 00:21:28.315 Capacity (in LBAs): 131072 (0GiB) 00:21:28.315 Utilization (in LBAs): 131072 (0GiB) 00:21:28.315 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:28.315 EUI64: ABCDEF0123456789 00:21:28.315 UUID: 5300c41c-eb57-4321-8fea-cbdc6a6ea0a0 00:21:28.315 Thin Provisioning: Not Supported 00:21:28.315 Per-NS Atomic Units: Yes 00:21:28.315 Atomic Boundary Size (Normal): 0 00:21:28.315 Atomic Boundary Size (PFail): 0 00:21:28.315 Atomic Boundary Offset: 0 00:21:28.315 Maximum Single Source Range Length: 65535 00:21:28.315 
Maximum Copy Length: 65535 00:21:28.316 Maximum Source Range Count: 1 00:21:28.316 NGUID/EUI64 Never Reused: No 00:21:28.316 Namespace Write Protected: No 00:21:28.316 Number of LBA Formats: 1 00:21:28.316 Current LBA Format: LBA Format #00 00:21:28.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:28.316 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.316 rmmod nvme_tcp 00:21:28.316 rmmod nvme_fabrics 00:21:28.316 rmmod nvme_keyring 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2329374 ']' 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@490 -- # killprocess 2329374 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 2329374 ']' 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 2329374 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2329374 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2329374' 00:21:28.316 killing process with pid 2329374 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 2329374 00:21:28.316 [2024-05-15 11:13:25.514302] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:28.316 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 2329374 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.573 11:13:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.102 11:13:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.102 00:21:31.102 real 0m9.070s 00:21:31.102 user 0m7.274s 00:21:31.102 sys 0m4.316s 00:21:31.102 11:13:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:31.102 11:13:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.102 ************************************ 00:21:31.102 END TEST nvmf_identify 00:21:31.102 ************************************ 00:21:31.102 11:13:27 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:31.102 11:13:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:31.102 11:13:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:31.102 11:13:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.102 ************************************ 00:21:31.102 START TEST nvmf_perf 00:21:31.102 ************************************ 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:31.102 * Looking for test storage... 
00:21:31.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.102 11:13:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.103 11:13:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.360 11:13:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.360 11:13:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.360 11:13:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.360 11:13:33 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.360 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.360 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.361 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:21:36.361 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.361 
11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:36.361 00:21:36.361 --- 10.0.0.2 ping statistics --- 00:21:36.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.361 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:21:36.361 00:21:36.361 --- 10.0.0.1 ping statistics --- 00:21:36.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.361 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2333132 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2333132 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 2333132 ']' 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:36.361 11:13:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.361 [2024-05-15 11:13:33.337796] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:21:36.361 [2024-05-15 11:13:33.337838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.361 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.361 [2024-05-15 11:13:33.399010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.361 [2024-05-15 11:13:33.477005] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.361 [2024-05-15 11:13:33.477045] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.361 [2024-05-15 11:13:33.477052] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.361 [2024-05-15 11:13:33.477058] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.361 [2024-05-15 11:13:33.477063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.361 [2024-05-15 11:13:33.477105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.361 [2024-05-15 11:13:33.477194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.361 [2024-05-15 11:13:33.477222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.361 [2024-05-15 11:13:33.477224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:36.924 11:13:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:40.191 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:40.191 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:40.191 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:40.191 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:40.449 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:40.449 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:21:40.449 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:40.449 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:40.449 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:40.706 [2024-05-15 11:13:37.749186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.706 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.706 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:40.706 11:13:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.971 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:40.971 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:41.255 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.255 [2024-05-15 11:13:38.507854] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:41.255 [2024-05-15 11:13:38.508177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.528 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:41.528 11:13:38 
nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:41.529 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:41.529 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:41.529 11:13:38 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:42.897 Initializing NVMe Controllers 00:21:42.897 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:42.897 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:42.897 Initialization complete. Launching workers. 00:21:42.897 ======================================================== 00:21:42.897 Latency(us) 00:21:42.897 Device Information : IOPS MiB/s Average min max 00:21:42.897 PCIE (0000:5e:00.0) NSID 1 from core 0: 97618.87 381.32 327.37 39.32 5212.35 00:21:42.897 ======================================================== 00:21:42.897 Total : 97618.87 381.32 327.37 39.32 5212.35 00:21:42.897 00:21:42.897 11:13:39 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.897 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.267 Initializing NVMe Controllers 00:21:44.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:44.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:44.267 Initialization complete. Launching workers. 
00:21:44.267 ======================================================== 00:21:44.267 Latency(us) 00:21:44.267 Device Information : IOPS MiB/s Average min max 00:21:44.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.66 0.38 10414.69 117.92 44723.27 00:21:44.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.83 0.18 21521.03 7810.83 47899.34 00:21:44.267 ======================================================== 00:21:44.267 Total : 144.49 0.56 14014.68 117.92 47899.34 00:21:44.267 00:21:44.267 11:13:41 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.267 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.638 Initializing NVMe Controllers 00:21:45.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.638 Initialization complete. Launching workers. 
00:21:45.638 ======================================================== 00:21:45.638 Latency(us) 00:21:45.638 Device Information : IOPS MiB/s Average min max 00:21:45.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10805.52 42.21 2967.41 345.17 9786.59 00:21:45.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3925.83 15.34 8187.37 6548.33 18743.41 00:21:45.638 ======================================================== 00:21:45.638 Total : 14731.35 57.54 4358.50 345.17 18743.41 00:21:45.638 00:21:45.638 11:13:42 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:45.638 11:13:42 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:45.638 11:13:42 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:45.638 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.160 Initializing NVMe Controllers 00:21:48.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.160 Controller IO queue size 128, less than required. 00:21:48.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.160 Controller IO queue size 128, less than required. 00:21:48.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:48.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:48.160 Initialization complete. Launching workers. 
00:21:48.160 ======================================================== 00:21:48.160 Latency(us) 00:21:48.160 Device Information : IOPS MiB/s Average min max 00:21:48.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1859.11 464.78 69902.90 40500.75 96650.66 00:21:48.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.88 149.97 220288.42 77455.75 330372.23 00:21:48.160 ======================================================== 00:21:48.160 Total : 2458.99 614.75 106589.75 40500.75 330372.23 00:21:48.160 00:21:48.160 11:13:45 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:48.160 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.160 No valid NVMe controllers or AIO or URING devices found 00:21:48.160 Initializing NVMe Controllers 00:21:48.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.160 Controller IO queue size 128, less than required. 00:21:48.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.160 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:48.160 Controller IO queue size 128, less than required. 00:21:48.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.160 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:48.160 WARNING: Some requested NVMe devices were skipped 00:21:48.160 11:13:45 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:48.160 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.682 Initializing NVMe Controllers 00:21:50.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.682 Controller IO queue size 128, less than required. 00:21:50.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.682 Controller IO queue size 128, less than required. 00:21:50.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:50.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:50.683 Initialization complete. Launching workers. 
00:21:50.683 00:21:50.683 ==================== 00:21:50.683 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:50.683 TCP transport: 00:21:50.683 polls: 17031 00:21:50.683 idle_polls: 12062 00:21:50.683 sock_completions: 4969 00:21:50.683 nvme_completions: 6883 00:21:50.683 submitted_requests: 10328 00:21:50.683 queued_requests: 1 00:21:50.683 00:21:50.683 ==================== 00:21:50.683 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:50.683 TCP transport: 00:21:50.683 polls: 12948 00:21:50.683 idle_polls: 8020 00:21:50.683 sock_completions: 4928 00:21:50.683 nvme_completions: 7031 00:21:50.683 submitted_requests: 10546 00:21:50.683 queued_requests: 1 00:21:50.683 ======================================================== 00:21:50.683 Latency(us) 00:21:50.683 Device Information : IOPS MiB/s Average min max 00:21:50.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.09 429.52 76349.79 49437.46 119418.06 00:21:50.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1755.04 438.76 73531.28 32445.69 121039.83 00:21:50.683 ======================================================== 00:21:50.683 Total : 3473.13 868.28 74925.54 32445.69 121039.83 00:21:50.683 00:21:50.683 11:13:47 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:50.683 11:13:47 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:50.940 11:13:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.940 11:13:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.940 rmmod nvme_tcp 00:21:50.940 rmmod nvme_fabrics 00:21:50.940 rmmod nvme_keyring 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2333132 ']' 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2333132 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 2333132 ']' 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 2333132 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2333132 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2333132' 00:21:50.940 killing process with pid 2333132 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 2333132 00:21:50.940 [2024-05-15 11:13:48.065292] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in 
v24.09 hit 1 times 00:21:50.940 11:13:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 2333132 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.311 11:13:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.842 11:13:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.842 00:21:54.842 real 0m23.753s 00:21:54.842 user 1m3.971s 00:21:54.842 sys 0m7.362s 00:21:54.842 11:13:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:54.842 11:13:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:54.842 ************************************ 00:21:54.842 END TEST nvmf_perf 00:21:54.842 ************************************ 00:21:54.842 11:13:51 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:54.842 11:13:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:54.842 11:13:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:54.842 11:13:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:54.842 ************************************ 00:21:54.842 START TEST nvmf_fio_host 00:21:54.842 ************************************ 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:54.842 * Looking for test storage... 00:21:54.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.842 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:54.843 
11:13:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.843 11:13:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:00.108 
11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.108 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.109 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.109 11:13:56 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.109 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.109 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:22:00.109 00:22:00.109 --- 10.0.0.2 ping statistics --- 00:22:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.109 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:22:00.109 00:22:00.109 --- 10.0.0.1 ping statistics --- 00:22:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.109 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:22:00.109 11:13:56 
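The `nvmf_tcp_init` section above builds a two-interface topology: one port of the NIC pair (`cvl_0_0`, the target side) is moved into a network namespace, both sides get an address on `10.0.0.0/24`, and port 4420 is opened for NVMe/TCP, which the two pings then verify. A dry-run sketch of those steps (device and namespace names are taken from this log; executing them for real needs root and the `cvl_0_*` netdevs present):

```shell
# Dry-run sketch of the namespace topology common.sh sets up above.
# run() prints each command instead of executing it; swap the body for "$@"
# on a real host with root privileges.
setup_topology() {
    run() { echo "+ $*"; }
    NS=cvl_0_0_ns_spdk
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add "$NS"
    run ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
    run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    run ip link set cvl_0_1 up
    run ip netns exec "$NS" ip link set cvl_0_0 up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
}
setup_topology
```

The cross-namespace pings in the log (`ping -c 1 10.0.0.2` from the host, `ip netns exec ... ping -c 1 10.0.0.1` from the namespace) confirm this topology before the target is started.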
nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2339072 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2339072 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 2339072 ']' 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:00.109 11:13:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.109 [2024-05-15 11:13:56.863243] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:22:00.109 [2024-05-15 11:13:56.863284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.109 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.109 [2024-05-15 11:13:56.919848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.109 [2024-05-15 11:13:57.000586] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.109 [2024-05-15 11:13:57.000622] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.109 [2024-05-15 11:13:57.000629] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.109 [2024-05-15 11:13:57.000635] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.109 [2024-05-15 11:13:57.000641] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.109 [2024-05-15 11:13:57.000677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.109 [2024-05-15 11:13:57.000699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.109 [2024-05-15 11:13:57.000786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.109 [2024-05-15 11:13:57.000788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 [2024-05-15 11:13:57.683035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 Malloc1 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.677 11:13:57 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 [2024-05-15 11:13:57.766556] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:00.677 [2024-05-15 11:13:57.766786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # 
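The `host/fio.sh@27` through `@34` lines above provision the target over its RPC socket: create the TCP transport, back a subsystem with a 64 MiB malloc ramdisk, and add data and discovery listeners on 10.0.0.2:4420. A dry-run sketch of the same sequence as plain `rpc.py` calls (the nqn, serial, and arguments mirror the log; `rpc` here only prints, so replace the `echo` with the real `scripts/rpc.py` to execute):

```shell
# Dry-run sketch of the target-side RPCs issued via rpc_cmd above.
rpc_setup() {
    rpc() { echo "+ rpc.py $*"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
rpc_setup
```

Note the deprecation warning the log emits for `nvmf_subsystem_add_listener`: `[listen_]address.transport` is deprecated in favor of `trtype`, scheduled for removal in v24.09.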
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.677 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' 
]] 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:00.678 11:13:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:00.936 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:00.936 fio-3.35 00:22:00.936 Starting 1 thread 00:22:00.936 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.464 00:22:03.464 test: (groupid=0, jobs=1): err= 0: pid=2339446: Wed May 15 11:14:00 2024 00:22:03.464 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.8MiB/2005msec) 00:22:03.464 slat (nsec): min=1653, max=217663, avg=1786.50, stdev=2017.10 00:22:03.464 clat (usec): min=2990, max=10952, avg=6108.72, stdev=469.35 00:22:03.464 lat (usec): min=3020, max=10954, avg=6110.51, stdev=469.27 00:22:03.464 clat percentiles (usec): 00:22:03.464 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:03.464 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:22:03.464 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 
6652], 95.00th=[ 6849], 00:22:03.464 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8356], 99.95th=[ 9765], 00:22:03.464 | 99.99th=[10421] 00:22:03.464 bw ( KiB/s): min=45384, max=46912, per=99.93%, avg=46316.00, stdev=660.10, samples=4 00:22:03.464 iops : min=11346, max=11728, avg=11579.00, stdev=165.03, samples=4 00:22:03.464 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(90.1MiB/2005msec); 0 zone resets 00:22:03.464 slat (nsec): min=1706, max=210625, avg=1862.81, stdev=1578.67 00:22:03.464 clat (usec): min=2292, max=9656, avg=4918.49, stdev=387.06 00:22:03.464 lat (usec): min=2308, max=9658, avg=4920.36, stdev=387.02 00:22:03.464 clat percentiles (usec): 00:22:03.464 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:03.464 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:03.464 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:03.464 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 7570], 99.95th=[ 8717], 00:22:03.464 | 99.99th=[ 8979] 00:22:03.464 bw ( KiB/s): min=45712, max=46416, per=100.00%, avg=46034.00, stdev=362.88, samples=4 00:22:03.464 iops : min=11428, max=11604, avg=11508.50, stdev=90.72, samples=4 00:22:03.464 lat (msec) : 4=0.50%, 10=99.48%, 20=0.02% 00:22:03.464 cpu : usr=74.05%, sys=24.35%, ctx=97, majf=0, minf=4 00:22:03.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:03.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:03.464 issued rwts: total=23232,23070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:03.464 00:22:03.464 Run status group 0 (all jobs): 00:22:03.464 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.8MiB (95.2MB), run=2005-2005msec 00:22:03.464 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), 
io=90.1MiB (94.5MB), run=2005-2005msec 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:03.464 11:14:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:03.464 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:03.464 fio-3.35 00:22:03.464 Starting 1 thread 00:22:03.722 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.253 00:22:06.253 test: (groupid=0, jobs=1): err= 0: pid=2339969: Wed May 15 11:14:03 2024 00:22:06.253 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2006msec) 00:22:06.253 slat (nsec): min=2468, max=85794, avg=2912.21, stdev=1239.47 00:22:06.253 clat (usec): min=1778, max=14440, avg=6902.43, stdev=1596.01 00:22:06.253 lat (usec): min=1780, max=14443, avg=6905.35, stdev=1596.14 00:22:06.253 clat percentiles (usec): 00:22:06.253 | 1.00th=[ 3785], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5538], 00:22:06.253 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7373], 00:22:06.253 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8848], 95.00th=[ 9634], 00:22:06.253 | 
99.00th=[11207], 99.50th=[11731], 99.90th=[13173], 99.95th=[13304], 00:22:06.253 | 99.99th=[13566] 00:22:06.253 bw ( KiB/s): min=79840, max=99104, per=50.03%, avg=86416.00, stdev=8921.27, samples=4 00:22:06.253 iops : min= 4990, max= 6194, avg=5401.00, stdev=557.58, samples=4 00:22:06.253 write: IOPS=6438, BW=101MiB/s (105MB/s)(177MiB/1757msec); 0 zone resets 00:22:06.253 slat (usec): min=28, max=384, avg=32.28, stdev= 6.89 00:22:06.253 clat (usec): min=2591, max=15894, avg=8779.84, stdev=1497.98 00:22:06.253 lat (usec): min=2622, max=15925, avg=8812.12, stdev=1499.41 00:22:06.253 clat percentiles (usec): 00:22:06.253 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7504], 00:22:06.253 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:06.253 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11600], 00:22:06.253 | 99.00th=[12387], 99.50th=[12780], 99.90th=[14746], 99.95th=[15008], 00:22:06.253 | 99.99th=[15795] 00:22:06.253 bw ( KiB/s): min=82304, max=103648, per=87.36%, avg=89992.00, stdev=9940.36, samples=4 00:22:06.253 iops : min= 5144, max= 6478, avg=5624.50, stdev=621.27, samples=4 00:22:06.253 lat (msec) : 2=0.02%, 4=1.49%, 10=89.12%, 20=9.38% 00:22:06.253 cpu : usr=86.18%, sys=12.87%, ctx=41, majf=0, minf=1 00:22:06.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:06.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:06.253 issued rwts: total=21655,11312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:06.253 00:22:06.253 Run status group 0 (all jobs): 00:22:06.253 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (355MB), run=2006-2006msec 00:22:06.253 WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=177MiB (185MB), run=1757-1757msec 00:22:06.253 
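Both fio runs above use the same invocation pattern: `LD_PRELOAD` the SPDK `spdk_nvme` ioengine into stock fio and pass the NVMe-oF connection parameters through `--filename`. A small helper sketch that builds that filename string (`make_spdk_filename` is a hypothetical name; the field format mirrors the log):

```shell
# Helper sketch: compose the --filename argument the SPDK fio plugin parses.
make_spdk_filename() {
    trtype=$1; traddr=$2; trsvcid=$3; ns=$4
    printf 'trtype=%s adrfam=IPv4 traddr=%s trsvcid=%s ns=%s\n' \
        "$trtype" "$traddr" "$trsvcid" "$ns"
}

# Invocation pattern from the log (paths assumed, not executed here):
#   LD_PRELOAD=.../spdk/build/fio/spdk_nvme /usr/src/fio/fio example_config.fio \
#       --filename="$(make_spdk_filename tcp 10.0.0.2 4420 1)" --bs=4096
make_spdk_filename tcp 10.0.0.2 4420 1
```

The second run swaps in `mock_sgl_config.fio` (16 KiB mixed random I/O) against the same connection string, which is why only the job file differs between the two fio result blocks.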
11:14:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.253 rmmod nvme_tcp 00:22:06.253 rmmod nvme_fabrics 00:22:06.253 rmmod nvme_keyring 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2339072 ']' 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2339072 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 2339072 ']' 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill 
-0 2339072 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2339072 00:22:06.253 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2339072' 00:22:06.254 killing process with pid 2339072 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 2339072 00:22:06.254 [2024-05-15 11:14:03.169921] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 2339072 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.254 11:14:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.783 11:14:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- 
# ip -4 addr flush cvl_0_1 00:22:08.783 00:22:08.783 real 0m13.765s 00:22:08.783 user 0m40.930s 00:22:08.783 sys 0m5.608s 00:22:08.783 11:14:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:08.783 11:14:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.783 ************************************ 00:22:08.783 END TEST nvmf_fio_host 00:22:08.783 ************************************ 00:22:08.783 11:14:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:08.783 11:14:05 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:08.783 11:14:05 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:08.783 11:14:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.783 ************************************ 00:22:08.783 START TEST nvmf_failover 00:22:08.784 ************************************ 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:08.784 * Looking for test storage... 
00:22:08.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.784 11:14:05 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.784 11:14:05 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.784 11:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.146 11:14:10 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.146 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:14.146 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.146 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.146 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:22:14.147 00:22:14.147 --- 10.0.0.2 ping statistics --- 00:22:14.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.147 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:14.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:22:14.147 00:22:14.147 --- 10.0.0.1 ping statistics --- 00:22:14.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.147 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2343829 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2343829 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2343829 ']' 
00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:14.147 11:14:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:14.147 [2024-05-15 11:14:11.000960] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:22:14.147 [2024-05-15 11:14:11.001002] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.147 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.147 [2024-05-15 11:14:11.058406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:14.147 [2024-05-15 11:14:11.136063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.147 [2024-05-15 11:14:11.136101] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.147 [2024-05-15 11:14:11.136107] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.147 [2024-05-15 11:14:11.136114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.147 [2024-05-15 11:14:11.136118] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.147 [2024-05-15 11:14:11.136228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.147 [2024-05-15 11:14:11.136332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.147 [2024-05-15 11:14:11.136334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.715 11:14:11 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:14.973 [2024-05-15 11:14:11.998023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.973 11:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:14.973 Malloc0 00:22:14.973 11:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.231 11:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.489 11:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.747 [2024-05-15 11:14:12.757157] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:15.747 [2024-05-15 11:14:12.757406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.747 11:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:15.747 [2024-05-15 11:14:12.941862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:15.747 11:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:16.006 [2024-05-15 11:14:13.122449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2344198 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2344198 /var/tmp/bdevperf.sock 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2344198 ']' 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:16.006 11:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:16.941 11:14:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:16.941 11:14:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:22:16.941 11:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:17.200 NVMe0n1 00:22:17.200 11:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:17.458 00:22:17.458 11:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.458 11:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2344426 00:22:17.458 11:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:18.393 11:14:15 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.652 [2024-05-15 
11:14:15.741900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faa010 is same with the state(5) to be set 00:22:18.652 11:14:15 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:21.938 11:14:18 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.938 00:22:22.197 11:14:19 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.197 [2024-05-15 11:14:19.391659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 
is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.197 [2024-05-15 11:14:19.391758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.198 [2024-05-15 11:14:19.391764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.198 [2024-05-15 11:14:19.391770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.198 [2024-05-15 11:14:19.391775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faac20 is same with the state(5) to be set 00:22:22.198 11:14:19 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:25.485 11:14:22 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.485 [2024-05-15 11:14:22.591615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.485 11:14:22 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:26.422 11:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:26.680 [2024-05-15 11:14:23.795702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795748] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795831] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.680 [2024-05-15 11:14:23.795883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795901] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795972] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.795995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796040] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 [2024-05-15 11:14:23.796093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e017f0 is same with the state(5) to be set 00:22:26.681 11:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2344426 00:22:33.253 0 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2344198 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2344198 ']' 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2344198 
00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2344198 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2344198' 00:22:33.253 killing process with pid 2344198 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2344198 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2344198 00:22:33.253 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.253 [2024-05-15 11:14:13.193139] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:22:33.253 [2024-05-15 11:14:13.193197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344198 ] 00:22:33.253 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.253 [2024-05-15 11:14:13.247375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.253 [2024-05-15 11:14:13.322223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.253 Running I/O for 15 seconds... 
00:22:33.253 [2024-05-15 11:14:15.742295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.253 [2024-05-15 11:14:15.742328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.253 [2024-05-15 11:14:15.742352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.253 [2024-05-15 11:14:15.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.253 [2024-05-15 11:14:15.742383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.253 [2024-05-15 11:14:15.742397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.253 [2024-05-15 11:14:15.742412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.253 [2024-05-15 11:14:15.742426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.253 [2024-05-15 11:14:15.742434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.253 [2024-05-15 11:14:15.742440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 
11:14:15.742586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742666] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 [2024-05-15 11:14:15.742768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.742985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.742992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.254 
[2024-05-15 11:14:15.742999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.743006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.743013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.743020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.743028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.254 [2024-05-15 11:14:15.743036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.254 [2024-05-15 11:14:15.743043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 
[2024-05-15 11:14:15.743250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.255 [2024-05-15 11:14:15.743494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.255 [2024-05-15 11:14:15.743627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.255 [2024-05-15 11:14:15.743635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 
[2024-05-15 11:14:15.743745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.743979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.743987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.256 [2024-05-15 11:14:15.743993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.256 [2024-05-15 11:14:15.744187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.256 [2024-05-15 11:14:15.744218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.256 [2024-05-15 11:14:15.744224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94448 len:8 PRP1 0x0 PRP2 0x0 00:22:33.256 [2024-05-15 11:14:15.744231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.256 [2024-05-15 11:14:15.744273] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12592f0 was disconnected and freed. reset controller. 
00:22:33.256 [2024-05-15 11:14:15.744286] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:33.256 [2024-05-15 11:14:15.744307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.256 [2024-05-15 11:14:15.744314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:15.744322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.257 [2024-05-15 11:14:15.744328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:15.744335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.257 [2024-05-15 11:14:15.744341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:15.744348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.257 [2024-05-15 11:14:15.744355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:15.744361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:33.257 [2024-05-15 11:14:15.747263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.257 [2024-05-15 11:14:15.747291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123a400 (9): Bad file descriptor 00:22:33.257 [2024-05-15 11:14:15.898462] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:33.257 [2024-05-15 11:14:19.393275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59304 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.257 [2024-05-15 11:14:19.393579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:33.257 [2024-05-15 11:14:19.393635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.257 [2024-05-15 11:14:19.393778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.257 [2024-05-15 11:14:19.393784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 
[2024-05-15 11:14:19.393883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.393992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.393998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.258 [2024-05-15 11:14:19.394276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.258 [2024-05-15 11:14:19.394308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59872 len:8 PRP1 0x0 PRP2 0x0 00:22:33.258 [2024-05-15 11:14:19.394314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.258 [2024-05-15 11:14:19.394324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.258 [2024-05-15 11:14:19.394329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.258 [2024-05-15 11:14:19.394335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59880 len:8 PRP1 0x0 PRP2 0x0 00:22:33.258 [2024-05-15 11:14:19.394342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59888 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59896 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 
[2024-05-15 11:14:19.394400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59904 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59912 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59920 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:59928 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59936 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59944 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59952 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394563] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59960 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59968 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59976 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 
11:14:19.394641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59984 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59992 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60000 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60008 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60016 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60024 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60032 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394807] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60040 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60048 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60056 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60064 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 
[2024-05-15 11:14:19.394891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.259 [2024-05-15 11:14:19.394903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.259 [2024-05-15 11:14:19.394909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60072 len:8 PRP1 0x0 PRP2 0x0 00:22:33.259 [2024-05-15 11:14:19.394915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.259 [2024-05-15 11:14:19.394921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.394926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.394932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60080 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.394938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.394944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.394949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.394955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60088 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.394962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.394968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.394975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.394980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60096 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.394986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.394992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.394997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60104 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60112 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60120 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60128 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60136 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60144 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60152 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60160 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60168 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60176 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60184 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60192 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60200 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60208 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60216 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.395353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60224 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.395365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.395371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 
[2024-05-15 11:14:19.395376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.395382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60232 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.406027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.406038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.406044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.406050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60240 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.406057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.406063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.406068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.406074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60248 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.406081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.406087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.406092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.406097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:60256 len:8 PRP1 0x0 PRP2 0x0 00:22:33.260 [2024-05-15 11:14:19.406103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.260 [2024-05-15 11:14:19.406109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.260 [2024-05-15 11:14:19.406114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.260 [2024-05-15 11:14:19.406120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60264 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60272 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60280 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406190] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60288 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60296 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60304 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 
11:14:19.406270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59416 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59424 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59432 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59440 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59448 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59456 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.261 [2024-05-15 11:14:19.406404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.261 [2024-05-15 11:14:19.406409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59464 len:8 PRP1 0x0 PRP2 0x0 00:22:33.261 [2024-05-15 11:14:19.406415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406456] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1403e00 was disconnected and freed. reset controller. 
00:22:33.261 [2024-05-15 11:14:19.406465] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:33.261 [2024-05-15 11:14:19.406484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.261 [2024-05-15 11:14:19.406491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.261 [2024-05-15 11:14:19.406505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.261 [2024-05-15 11:14:19.406518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.261 [2024-05-15 11:14:19.406532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:19.406538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:33.261 [2024-05-15 11:14:19.406567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123a400 (9): Bad file descriptor 00:22:33.261 [2024-05-15 11:14:19.409551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.261 [2024-05-15 11:14:19.444853] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:33.261 [2024-05-15 11:14:23.796297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.261 [2024-05-15 11:14:23.796522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.261 [2024-05-15 11:14:23.796531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.262 [2024-05-15 11:14:23.796552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.262 [2024-05-15 11:14:23.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.262 [2024-05-15 11:14:23.796737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.262 [2024-05-15 11:14:23.796743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive nvme_qpair.c NOTICE pairs condensed: each remaining in-flight command was printed and completed with status "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" as the submission queue was torn down. READ commands: sqid:1, nsid:1, lba 55024-55232 in steps of 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0. WRITE commands: sqid:1, nsid:1, lba 55248-55624 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000. ...]
[... queued (not yet submitted) requests were then drained: for each WRITE sqid:1 cid:0 nsid:1, lba 55632-55808 in steps of 8, len:8, PRP1 0x0 PRP2 0x0, the log interleaves nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o with 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:, each again completed as ABORTED - SQ DELETION (00/08). Timestamps span 2024-05-15 11:14:23.796743 through 11:14:23.810312 (console time 00:22:33.262-00:22:33.265). ...]
00:22:33.265 [2024-05-15 11:14:23.810321] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.265 [2024-05-15 11:14:23.810327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.265 [2024-05-15 11:14:23.810334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 00:22:33.265 [2024-05-15 11:14:23.810343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:14:23.810352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.265 [2024-05-15 11:14:23.810358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.265 [2024-05-15 11:14:23.810365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 00:22:33.265 [2024-05-15 11:14:23.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:14:23.810419] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1403bf0 was disconnected and freed. reset controller. 
00:22:33.265 [2024-05-15 11:14:23.810433] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:33.265 [2024-05-15 11:14:23.810458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.265 [2024-05-15 11:14:23.810467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:14:23.810477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.265 [2024-05-15 11:14:23.810485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:14:23.810495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.265 [2024-05-15 11:14:23.810503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:14:23.810513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.265 [2024-05-15 11:14:23.810521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:14:23.810529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:33.265 [2024-05-15 11:14:23.810564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123a400 (9): Bad file descriptor 00:22:33.265 [2024-05-15 11:14:23.814489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.265 [2024-05-15 11:14:23.851149] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:33.265 00:22:33.265 Latency(us) 00:22:33.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.265 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:33.265 Verification LBA range: start 0x0 length 0x4000 00:22:33.265 NVMe0n1 : 15.01 10653.78 41.62 656.48 0.00 11294.41 422.07 22567.18 00:22:33.265 =================================================================================================================== 00:22:33.265 Total : 10653.78 41.62 656.48 0.00 11294.41 422.07 22567.18 00:22:33.265 Received shutdown signal, test time was about 15.000000 seconds 00:22:33.265 00:22:33.265 Latency(us) 00:22:33.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.265 =================================================================================================================== 00:22:33.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2346946 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:33.265 11:14:29 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2346946 /var/tmp/bdevperf.sock 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2346946 ']' 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:33.265 11:14:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.833 11:14:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:33.833 11:14:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:22:33.833 11:14:30 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:33.833 [2024-05-15 11:14:30.999246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:33.833 11:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:34.091 [2024-05-15 11:14:31.183764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:34.091 11:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.350 NVMe0n1 00:22:34.350 11:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.608 00:22:34.609 11:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.867 00:22:35.126 11:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.126 11:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:35.126 11:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:35.385 11:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:38.698 11:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.698 11:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:38.698 11:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:38.698 11:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2347879 00:22:38.698 11:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2347879 00:22:39.633 0 00:22:39.633 11:14:36 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.633 [2024-05-15 11:14:30.016879] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:22:39.633 [2024-05-15 11:14:30.016931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346946 ] 00:22:39.633 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.633 [2024-05-15 11:14:30.072716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.633 [2024-05-15 11:14:30.150092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.633 [2024-05-15 11:14:32.484451] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:39.633 [2024-05-15 11:14:32.484495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.633 [2024-05-15 11:14:32.484507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.633 [2024-05-15 11:14:32.484515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.633 [2024-05-15 11:14:32.484522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.633 [2024-05-15 11:14:32.484530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.633 [2024-05-15 11:14:32.484537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.633 [2024-05-15 11:14:32.484544] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.633 [2024-05-15 11:14:32.484551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.633 [2024-05-15 11:14:32.484558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.633 [2024-05-15 11:14:32.484577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.633 [2024-05-15 11:14:32.484590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc52400 (9): Bad file descriptor 00:22:39.633 [2024-05-15 11:14:32.578327] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:39.633 Running I/O for 1 seconds... 00:22:39.633 00:22:39.633 Latency(us) 00:22:39.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.633 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:39.633 Verification LBA range: start 0x0 length 0x4000 00:22:39.633 NVMe0n1 : 1.00 10788.12 42.14 0.00 0.00 11820.42 1410.45 12024.43 00:22:39.633 =================================================================================================================== 00:22:39.633 Total : 10788.12 42.14 0.00 0.00 11820.42 1410.45 12024.43 00:22:39.633 11:14:36 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.633 11:14:36 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:39.891 11:14:36 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:22:40.149 11:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.149 11:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:40.149 11:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.408 11:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2346946 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2346946 ']' 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2346946 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2346946 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2346946' 00:22:43.688 killing process with pid 2346946 00:22:43.688 11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2346946 00:22:43.688 
11:14:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2346946 00:22:43.945 11:14:40 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:43.945 11:14:40 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.945 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.945 rmmod nvme_tcp 00:22:43.945 rmmod nvme_fabrics 00:22:43.945 rmmod nvme_keyring 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2343829 ']' 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2343829 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2343829 ']' 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2343829 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@952 -- # uname 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2343829 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2343829' 00:22:44.203 killing process with pid 2343829 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2343829 00:22:44.203 [2024-05-15 11:14:41.276220] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:44.203 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2343829 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.461 11:14:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.363 11:14:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.363 00:22:46.363 real 
0m38.018s 00:22:46.363 user 2m3.088s 00:22:46.363 sys 0m7.230s 00:22:46.363 11:14:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:46.363 11:14:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.363 ************************************ 00:22:46.363 END TEST nvmf_failover 00:22:46.363 ************************************ 00:22:46.363 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:46.363 11:14:43 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:46.363 11:14:43 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:46.363 11:14:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.622 ************************************ 00:22:46.622 START TEST nvmf_host_discovery 00:22:46.622 ************************************ 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:46.622 * Looking for test storage... 
00:22:46.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.622 11:14:43 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.622 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:22:46.623 11:14:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.890 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.890 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.890 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.890 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.890 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.890 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.891 11:14:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.891 
11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.891 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.891 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.891 11:14:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:22:51.891 00:22:51.891 --- 10.0.0.2 ping statistics --- 00:22:51.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.891 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:22:51.891 00:22:51.891 --- 10.0.0.1 ping statistics --- 00:22:51.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.891 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2352140 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2352140 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2352140 ']' 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.891 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:51.892 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.892 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:51.892 11:14:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.892 [2024-05-15 11:14:48.883863] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:22:51.892 [2024-05-15 11:14:48.883903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.892 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.892 [2024-05-15 11:14:48.941014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.892 [2024-05-15 11:14:49.011835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.892 [2024-05-15 11:14:49.011876] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.892 [2024-05-15 11:14:49.011882] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.892 [2024-05-15 11:14:49.011888] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.892 [2024-05-15 11:14:49.011893] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.892 [2024-05-15 11:14:49.011928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.459 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.459 [2024-05-15 11:14:49.722782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.718 [2024-05-15 11:14:49.734746] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:52.718 [2024-05-15 11:14:49.734935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.718 null0 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.718 null1 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2352345 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2352345 /tmp/host.sock 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2352345 ']' 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:52.718 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:52.718 11:14:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.718 [2024-05-15 11:14:49.810159] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:22:52.718 [2024-05-15 11:14:49.810209] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352345 ] 00:22:52.718 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.718 [2024-05-15 11:14:49.864356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.718 [2024-05-15 11:14:49.943400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.669 11:14:50 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.669 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.927 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.928 [2024-05-15 11:14:50.958158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 
00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:53.928 11:14:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:53.928 
11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:22:53.928 11:14:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:22:54.494 [2024-05-15 11:14:51.682338] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:54.494 [2024-05-15 11:14:51.682361] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:54.494 [2024-05-15 11:14:51.682376] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:54.752 [2024-05-15 11:14:51.768648] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:54.752 [2024-05-15 11:14:51.984885] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:54.752 [2024-05-15 11:14:51.984907] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local max=10 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:55.010 11:14:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:55.010 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # get_notification_count 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.269 11:14:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.269 [2024-05-15 11:14:52.458219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:55.269 [2024-05-15 11:14:52.459293] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:55.269 [2024-05-15 11:14:52.459316] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.269 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 
00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:55.270 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.528 [2024-05-15 11:14:52.585690] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:55.528 11:14:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:22:55.528 [2024-05-15 11:14:52.687191] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:55.528 [2024-05-15 11:14:52.687208] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.528 [2024-05-15 11:14:52.687213] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.478 11:14:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.478 11:14:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.478 [2024-05-15 11:14:53.706306] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:56.478 [2024-05-15 11:14:53.706327] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:56.478 [2024-05-15 11:14:53.706917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:56.478 [2024-05-15 11:14:53.706933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.478 [2024-05-15 11:14:53.706941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.478 [2024-05-15 11:14:53.706948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.478 [2024-05-15 11:14:53.706955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.478 [2024-05-15 11:14:53.706961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.478 [2024-05-15 11:14:53.706968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.478 [2024-05-15 11:14:53.706974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.478 [2024-05-15 11:14:53.706980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- 
)) 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.478 [2024-05-15 11:14:53.716929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.478 [2024-05-15 11:14:53.726970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.478 [2024-05-15 11:14:53.727231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.478 [2024-05-15 11:14:53.727399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.478 [2024-05-15 11:14:53.727410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.478 [2024-05-15 11:14:53.727417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.478 [2024-05-15 11:14:53.727429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.478 [2024-05-15 11:14:53.727445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.478 [2024-05-15 11:14:53.727452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.478 [2024-05-15 11:14:53.727460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.478 [2024-05-15 11:14:53.727471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:56.478 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.478 [2024-05-15 11:14:53.737022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.478 [2024-05-15 11:14:53.737215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.478 [2024-05-15 11:14:53.737361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.478 [2024-05-15 11:14:53.737372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.478 [2024-05-15 11:14:53.737379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.478 [2024-05-15 11:14:53.737389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.478 [2024-05-15 11:14:53.737398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.478 [2024-05-15 11:14:53.737404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.478 [2024-05-15 11:14:53.737410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:56.478 [2024-05-15 11:14:53.737420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:56.738 [2024-05-15 11:14:53.747073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.738 [2024-05-15 11:14:53.747354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.747518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.747528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.738 [2024-05-15 11:14:53.747535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.738 [2024-05-15 11:14:53.747551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.738 [2024-05-15 11:14:53.747568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.738 [2024-05-15 11:14:53.747575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.738 [2024-05-15 11:14:53.747581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.738 [2024-05-15 11:14:53.747591] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:56.738 [2024-05-15 11:14:53.757132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.738 [2024-05-15 11:14:53.757319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.757436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.757446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.738 [2024-05-15 11:14:53.757453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.738 [2024-05-15 11:14:53.757464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.738 [2024-05-15 11:14:53.757474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.738 [2024-05-15 11:14:53.757479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.738 [2024-05-15 11:14:53.757486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.738 [2024-05-15 11:14:53.757495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:56.738 [2024-05-15 11:14:53.767185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.738 [2024-05-15 11:14:53.767379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.767542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.767552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.738 [2024-05-15 11:14:53.767560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.738 [2024-05-15 11:14:53.767570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.738 [2024-05-15 11:14:53.767579] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.738 
[2024-05-15 11:14:53.767585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.738 [2024-05-15 11:14:53.767591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.738 [2024-05-15 11:14:53.767603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.738 [2024-05-15 11:14:53.777236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.738 [2024-05-15 11:14:53.777429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.777640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.777650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.738 [2024-05-15 11:14:53.777657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.738 [2024-05-15 11:14:53.777668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.738 [2024-05-15 11:14:53.777683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] 
Ctrlr is in error state 00:22:56.738 [2024-05-15 11:14:53.777690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.738 [2024-05-15 11:14:53.777696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.738 [2024-05-15 11:14:53.777706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:56.738 [2024-05-15 11:14:53.787290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:56.738 [2024-05-15 11:14:53.787395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.787536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.738 [2024-05-15 11:14:53.787546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855220 with addr=10.0.0.2, port=4420 00:22:56.738 [2024-05-15 11:14:53.787553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855220 is same with the state(5) to be set 00:22:56.738 [2024-05-15 11:14:53.787563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855220 (9): Bad file descriptor 00:22:56.738 [2024-05-15 11:14:53.787571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.738 [2024-05-15 11:14:53.787577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.738 [2024-05-15 11:14:53.787583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.738 [2024-05-15 11:14:53.787592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:56.738 [2024-05-15 11:14:53.792731] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:56.738 [2024-05-15 11:14:53.792747] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.738 11:14:53 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.738 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.739 11:14:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.997 11:14:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.931 [2024-05-15 11:14:55.131238] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:57.931 [2024-05-15 11:14:55.131254] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:57.931 [2024-05-15 11:14:55.131266] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.190 [2024-05-15 11:14:55.219547] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:58.448 [2024-05-15 11:14:55.488268] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:58.448 [2024-05-15 11:14:55.488295] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.448 request: 00:22:58.448 { 00:22:58.448 "name": "nvme", 00:22:58.448 "trtype": "tcp", 00:22:58.448 "traddr": "10.0.0.2", 00:22:58.448 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:58.448 "adrfam": "ipv4", 00:22:58.448 "trsvcid": "8009", 00:22:58.448 "wait_for_attach": true, 00:22:58.448 "method": "bdev_nvme_start_discovery", 00:22:58.448 "req_id": 1 00:22:58.448 } 00:22:58.448 Got JSON-RPC error 
response 00:22:58.448 response: 00:22:58.448 { 00:22:58.448 "code": -17, 00:22:58.448 "message": "File exists" 00:22:58.448 } 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.448 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.449 request: 00:22:58.449 { 00:22:58.449 "name": "nvme_second", 00:22:58.449 "trtype": "tcp", 
00:22:58.449 "traddr": "10.0.0.2", 00:22:58.449 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:58.449 "adrfam": "ipv4", 00:22:58.449 "trsvcid": "8009", 00:22:58.449 "wait_for_attach": true, 00:22:58.449 "method": "bdev_nvme_start_discovery", 00:22:58.449 "req_id": 1 00:22:58.449 } 00:22:58.449 Got JSON-RPC error response 00:22:58.449 response: 00:22:58.449 { 00:22:58.449 "code": -17, 00:22:58.449 "message": "File exists" 00:22:58.449 } 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.449 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.706 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.706 11:14:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.707 11:14:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.641 [2024-05-15 11:14:56.731710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.641 [2024-05-15 11:14:56.731966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.641 [2024-05-15 11:14:56.731977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1868d40 with addr=10.0.0.2, port=8010 00:22:59.641 [2024-05-15 11:14:56.731988] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:59.641 [2024-05-15 11:14:56.731995] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:59.641 [2024-05-15 11:14:56.732001] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:00.574 [2024-05-15 11:14:57.734192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.574 [2024-05-15 11:14:57.734453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.574 [2024-05-15 11:14:57.734464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1868d40 with addr=10.0.0.2, port=8010 00:23:00.574 [2024-05-15 11:14:57.734474] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:00.574 [2024-05-15 11:14:57.734480] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:00.574 [2024-05-15 11:14:57.734486] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:01.507 [2024-05-15 11:14:58.736365] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:01.507 request: 00:23:01.507 { 00:23:01.507 "name": "nvme_second", 
00:23:01.507 "trtype": "tcp", 00:23:01.507 "traddr": "10.0.0.2", 00:23:01.507 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:01.507 "adrfam": "ipv4", 00:23:01.507 "trsvcid": "8010", 00:23:01.507 "attach_timeout_ms": 3000, 00:23:01.507 "method": "bdev_nvme_start_discovery", 00:23:01.507 "req_id": 1 00:23:01.507 } 00:23:01.507 Got JSON-RPC error response 00:23:01.507 response: 00:23:01.507 { 00:23:01.507 "code": -110, 00:23:01.507 "message": "Connection timed out" 00:23:01.507 } 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:01.507 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM 
EXIT 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2352345 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.766 rmmod nvme_tcp 00:23:01.766 rmmod nvme_fabrics 00:23:01.766 rmmod nvme_keyring 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2352140 ']' 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2352140 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 2352140 ']' 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 2352140 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2352140 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:01.766 11:14:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2352140' 00:23:01.766 killing process with pid 2352140 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 2352140 00:23:01.766 [2024-05-15 11:14:58.904327] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:01.766 11:14:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 2352140 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.025 11:14:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.944 11:15:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.944 00:23:03.944 real 0m17.557s 00:23:03.944 user 0m22.444s 00:23:03.944 sys 0m5.133s 00:23:03.945 11:15:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:03.945 11:15:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.945 
************************************ 00:23:03.945 END TEST nvmf_host_discovery 00:23:03.945 ************************************ 00:23:04.239 11:15:01 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:04.239 11:15:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:04.239 11:15:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:04.239 11:15:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.239 ************************************ 00:23:04.239 START TEST nvmf_host_multipath_status 00:23:04.239 ************************************ 00:23:04.239 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:04.239 * Looking for test storage... 00:23:04.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:04.239 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.239 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.240 11:15:01 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # 
bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:04.240 11:15:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 
00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:09.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.514 11:15:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:09.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:09.514 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:09.515 Found net devices under 0000:86:00.0: cvl_0_0 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:09.515 Found net devices under 0000:86:00.1: cvl_0_1 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.515 11:15:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:09.515 11:15:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:23:09.515 00:23:09.515 --- 10.0.0.2 ping statistics --- 00:23:09.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.515 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:23:09.515 00:23:09.515 --- 10.0.0.1 ping statistics --- 00:23:09.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.515 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2357924 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 
00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2357924 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2357924 ']' 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:09.515 11:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:09.515 [2024-05-15 11:15:06.745008] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:23:09.515 [2024-05-15 11:15:06.745048] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.515 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.774 [2024-05-15 11:15:06.810640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.774 [2024-05-15 11:15:06.909474] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.774 [2024-05-15 11:15:06.909514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:09.774 [2024-05-15 11:15:06.909521] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.774 [2024-05-15 11:15:06.909527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.774 [2024-05-15 11:15:06.909532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.774 [2024-05-15 11:15:06.909588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.774 [2024-05-15 11:15:06.909592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.340 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:10.340 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:23:10.340 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.340 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:10.340 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:10.598 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.598 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2357924 00:23:10.598 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:10.598 [2024-05-15 11:15:07.771242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.598 11:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:10.856 Malloc0 00:23:10.856 11:15:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:11.114 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.114 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.372 [2024-05-15 11:15:08.459550] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:11.372 [2024-05-15 11:15:08.459802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.372 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:11.372 [2024-05-15 11:15:08.636247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2358290 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 2358290 /var/tmp/bdevperf.sock 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2358290 ']' 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:11.630 11:15:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:12.564 11:15:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:12.564 11:15:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:23:12.564 11:15:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:12.564 11:15:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:12.822 Nvme0n1 00:23:13.079 11:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:13.337 
Nvme0n1 00:23:13.337 11:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:13.337 11:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:15.234 11:15:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:15.235 11:15:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:15.491 11:15:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:15.749 11:15:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:16.685 11:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:16.685 11:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.685 11:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.685 11:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:16.944 
11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.944 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.202 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.202 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.202 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.202 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.461 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.461 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.461 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.461 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.718 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.718 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.718 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.718 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.718 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.718 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:17.719 11:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:17.977 11:15:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:18.234 11:15:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:19.169 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:19.169 11:15:16 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:19.169 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.169 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.427 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.685 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.685 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.685 11:15:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.685 11:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.943 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.943 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:19.943 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.943 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:20.202 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:20.461 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:20.719 11:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:21.653 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:21.653 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:21.653 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.653 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:21.911 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.911 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:21.911 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.911 11:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:21.911 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.911 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:21.911 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.169 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.169 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.169 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.169 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.170 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.428 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.428 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:22.428 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.428 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:22.686 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.686 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:22.686 11:15:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.686 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:22.686 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.686 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:22.686 11:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:22.945 11:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:23.203 11:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:24.138 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:24.138 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:24.138 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.138 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.396 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:23:24.396 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:24.396 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.396 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:24.654 11:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.912 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.912 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # 
port_status 4420 accessible true 00:23:24.912 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.912 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.170 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.170 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:25.170 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.170 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.429 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.429 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:25.429 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:25.429 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:25.687 11:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:26.622 11:15:23 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:26.622 11:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:26.622 11:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.622 11:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.919 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:26.919 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:26.919 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.919 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.200 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.200 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:27.201 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.201 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.201 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.201 11:15:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.201 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.201 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.460 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.460 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:27.460 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.460 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.718 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.718 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:27.718 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.719 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:27.719 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.719 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible 
optimized 00:23:27.719 11:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:27.977 11:15:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:28.236 11:15:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:29.173 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:29.173 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:29.173 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.173 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.432 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:29.691 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.692 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:29.692 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.692 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:29.951 11:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.951 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:29.951 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.951 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:29.951 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.951 11:15:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:29.951 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.951 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.209 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.209 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:30.467 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:30.467 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:30.467 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:30.726 11:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:31.663 11:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:31.663 11:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:31.922 11:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.922 11:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:31.922 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.922 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:31.922 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.922 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.180 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.180 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.180 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:32.180 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:32.438 11:15:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:32.438 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.696 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.696 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:32.696 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.696 11:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:32.953 11:15:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.953 11:15:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:32.953 11:15:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:33.211 11:15:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:33.211 11:15:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:34.586 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:34.586 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:34.586 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.586 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.586 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:23:34.587 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.844 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.844 11:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:34.844 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.844 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.101 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.101 11:15:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.359 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.359 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:35.359 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:35.618 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:35.876 11:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:36.812 11:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:36.812 11:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:36.812 11:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.812 11:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.071 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.330 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.330 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:37.330 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.330 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.588 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.588 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.588 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.588 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.588 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.589 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.589 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.589 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.847 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.847 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:37.847 11:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:38.106 11:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:38.366 11:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:39.300 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:39.300 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 
current true 00:23:39.300 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.300 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.559 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.559 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:39.559 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.560 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.560 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.560 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.560 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.560 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.818 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.818 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.818 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.818 11:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.077 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2358290 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2358290 ']' 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill 
-0 2358290 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2358290 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2358290' 00:23:40.337 killing process with pid 2358290 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2358290 00:23:40.337 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2358290 00:23:40.602 Connection closed with partial response: 00:23:40.602 00:23:40.602 00:23:40.602 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2358290 00:23:40.602 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:40.602 [2024-05-15 11:15:08.698775] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
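The check_status/port_status loop traced above repeatedly calls `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` and pipes the result through a jq filter that selects one io_path by listener port and prints one boolean attribute. A minimal standalone sketch of that filter, run against a hand-written sample of the JSON shape the filters imply (the sample document, its values, and the `port_attr` helper name are assumptions for illustration, not captured RPC output):

```shell
#!/bin/sh
# Hypothetical sample mimicking the bdev_nvme_get_io_paths shape that the
# jq filters in this log select on: .poll_groups[].io_paths[] with a
# transport.trsvcid plus current/connected/accessible booleans.
# All values below are made up for the sketch.
sample='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": false, "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": true, "connected": true, "accessible": true }
    ] }
  ]
}'

# Same filter shape as port_status in host/multipath_status.sh: pick the
# io_path for one trsvcid and print a single attribute of it.
port_attr() {
  port=$1
  attr=$2
  echo "$sample" | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr"
}

port_attr 4420 current     # -> false
port_attr 4421 accessible  # -> true
```

In the test itself the printed value is then compared against the expected string with a `[[ $status == \t\r\u\e ]]`-style match, which is what the repeated `[[ true == \t\r\u\e ]]` lines in the trace are.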
00:23:40.602 [2024-05-15 11:15:08.698827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358290 ] 00:23:40.602 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.602 [2024-05-15 11:15:08.750345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.602 [2024-05-15 11:15:08.823357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.602 Running I/O for 90 seconds... 00:23:40.602 [2024-05-15 11:15:22.638360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.638578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.638585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.639818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.639840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.639867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.639888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.602 [2024-05-15 11:15:22.639907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.639928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.639949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:23:40.602 [2024-05-15 11:15:22.639962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.639969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.639982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.639989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.640003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.640010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.640023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.640031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:40.602 [2024-05-15 11:15:22.640044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.602 [2024-05-15 11:15:22.640051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.603 [2024-05-15 11:15:22.640071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:23:40.603 [2024-05-15 11:15:22.640236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.603 [2024-05-15 11:15:22.640353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.603 [2024-05-15 11:15:22.640461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:40.603 
[2024-05-15 11:15:22.640476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 
11:15:22.640589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 
11:15:22.640711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 
11:15:22.640824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 11:15:22.640923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.603 [2024-05-15 11:15:22.640931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:40.603 [2024-05-15 
11:15:22.640945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.640953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.640969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.640977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.640991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.640999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 
11:15:22.641061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 
11:15:22.641188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 
11:15:22.641407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 
11:15:22.641538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 
11:15:22.641661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 
11:15:22.641792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.604 [2024-05-15 11:15:22.641845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:40.604 [2024-05-15 11:15:22.641860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.641868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.641886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.641893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.641909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 
11:15:22.641916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.641932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.641939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.641955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.641962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.641978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.641985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 
11:15:22.642046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 
11:15:22.642173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.605 [2024-05-15 11:15:22.642242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:22.642615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.605 [2024-05-15 11:15:22.642622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:35.359987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.605 [2024-05-15 11:15:35.360028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:35.360063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.605 [2024-05-15 11:15:35.360073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:35.360087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.605 [2024-05-15 11:15:35.360095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:40.605 [2024-05-15 11:15:35.360109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.605 [2024-05-15 11:15:35.360116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.360130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.360137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.361986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.361993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.606 [2024-05-15 11:15:35.362014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.606 [2024-05-15 11:15:35.362367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:40.606 [2024-05-15 11:15:35.362382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.607 [2024-05-15 11:15:35.362515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.607 [2024-05-15 11:15:35.362962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.362984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.362997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.363004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.363017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.363024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.363037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.363044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.363057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.363065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.607 [2024-05-15 11:15:35.363078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.607 [2024-05-15 11:15:35.363085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.607 Received shutdown signal, test time was about 26.948556 seconds 00:23:40.607 00:23:40.607 Latency(us) 00:23:40.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.607 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.607 Verification LBA range: start 0x0 length 0x4000 00:23:40.607 Nvme0n1 : 26.95 10211.60 39.89 0.00 0.00 12513.04 254.66 3019898.88 
00:23:40.607 =================================================================================================================== 00:23:40.607 Total : 10211.60 39.89 0.00 0.00 12513.04 254.66 3019898.88 00:23:40.607 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.866 rmmod nvme_tcp 00:23:40.866 rmmod nvme_fabrics 00:23:40.866 rmmod nvme_keyring 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2357924 ']' 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # 
killprocess 2357924 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2357924 ']' 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2357924 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.866 11:15:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2357924 00:23:40.866 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:40.866 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:40.866 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2357924' 00:23:40.866 killing process with pid 2357924 00:23:40.866 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2357924 00:23:40.866 [2024-05-15 11:15:38.020248] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:40.866 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2357924 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.125 11:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.658 11:15:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.658 00:23:43.658 real 0m39.051s 00:23:43.658 user 1m45.909s 00:23:43.658 sys 0m10.221s 00:23:43.658 11:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:43.658 11:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 ************************************ 00:23:43.658 END TEST nvmf_host_multipath_status 00:23:43.658 ************************************ 00:23:43.658 11:15:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:43.658 11:15:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:43.658 11:15:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:43.658 11:15:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 ************************************ 00:23:43.658 START TEST nvmf_discovery_remove_ifc 00:23:43.658 ************************************ 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:43.658 * Looking for test storage... 
00:23:43.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.658 11:15:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:23:43.658 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.659 11:15:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.659 11:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.920 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:48.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:48.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:48.921 Found net devices under 0000:86:00.0: cvl_0_0 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:48.921 Found net devices under 0000:86:00.1: cvl_0_1 
00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.921 
11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.921 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:23:48.922 00:23:48.922 --- 10.0.0.2 ping statistics --- 00:23:48.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.922 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:23:48.922 00:23:48.922 --- 10.0.0.1 ping statistics --- 00:23:48.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.922 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2366707 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2366707 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2366707 ']' 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:48.922 11:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.922 [2024-05-15 11:15:45.859295] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:23:48.922 [2024-05-15 11:15:45.859338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.922 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.922 [2024-05-15 11:15:45.915606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.922 [2024-05-15 11:15:45.994535] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.922 [2024-05-15 11:15:45.994567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:48.922 [2024-05-15 11:15:45.994575] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.922 [2024-05-15 11:15:45.994581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.922 [2024-05-15 11:15:45.994586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.922 [2024-05-15 11:15:45.994608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.497 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.497 [2024-05-15 11:15:46.709551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.497 [2024-05-15 11:15:46.717538] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:49.497 [2024-05-15 11:15:46.717700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
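Earlier in the trace, `nvmf_tcp_init` (nvmf/common.sh@229-268) wires the two E810 ports into a point-to-point test link: `cvl_0_0` (10.0.0.2) is moved into a private network namespace to host the target, while `cvl_0_1` (10.0.0.1) stays in the root namespace as the initiator, and a one-packet ping in each direction verifies reachability. A dry-run sketch of that sequence, under the assumption that the commands are echoed rather than executed (namespace setup needs root and the log's iptables/flush steps are omitted):

```shell
#!/usr/bin/env bash
set -eu

# Dry-run wrapper: print each privileged command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                  # target-side network namespace (from the log)
TGT_IF=cvl_0_0; TGT_IP=10.0.0.2     # target port, lives inside the namespace
INI_IF=cvl_0_1; INI_IP=10.0.0.1     # initiator port, stays in the root namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Reachability check in both directions, as the log does with ping -c 1
run ping -c 1 "$TGT_IP"
run ip netns exec "$NS" ping -c 1 "$INI_IP"
```

Once this holds, every target-side process in the trace is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` array seen above.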
00:23:49.497 null0 00:23:49.497 [2024-05-15 11:15:46.749706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2366894 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2366894 /tmp/host.sock 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2366894 ']' 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:49.785 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:49.785 11:15:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.785 [2024-05-15 11:15:46.816529] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:23:49.785 [2024-05-15 11:15:46.816568] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366894 ] 00:23:49.785 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.785 [2024-05-15 11:15:46.870941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.785 [2024-05-15 11:15:46.949527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.363 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.621 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.622 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:50.622 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.622 11:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.555 [2024-05-15 11:15:48.758326] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:51.556 [2024-05-15 11:15:48.758351] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:51.556 [2024-05-15 11:15:48.758365] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:51.815 [2024-05-15 11:15:48.844629] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:51.815 [2024-05-15 11:15:48.900481] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:51.815 [2024-05-15 11:15:48.900525] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:51.815 [2024-05-15 11:15:48.900546] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:51.815 [2024-05-15 11:15:48.900560] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:51.815 [2024-05-15 11:15:48.900578] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.815 [2024-05-15 11:15:48.906479] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x117d8b0 was disconnected and freed. delete nvme_qpair. 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:51.815 11:15:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.074 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.074 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:52.074 11:15:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:53.008 11:15:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.942 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.200 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:54.200 11:15:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:55.135 11:15:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:56.073 11:15:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.453 [2024-05-15 11:15:54.341865] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:57.453 [2024-05-15 11:15:54.341907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 11:15:54.341918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 11:15:54.341928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 11:15:54.341935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 11:15:54.341942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 11:15:54.341948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 11:15:54.341955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 11:15:54.341962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 11:15:54.341968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 11:15:54.341975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 11:15:54.341981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x11449e0 is same with the state(5) to be set 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.453 [2024-05-15 11:15:54.351885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11449e0 (9): Bad file descriptor 00:23:57.453 [2024-05-15 11:15:54.361925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:57.453 11:15:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.388 11:15:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.388 [2024-05-15 11:15:55.420184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:59.322 [2024-05-15 11:15:56.444189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:59.322 [2024-05-15 11:15:56.444233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11449e0 with addr=10.0.0.2, port=4420 00:23:59.322 [2024-05-15 11:15:56.444251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11449e0 is same with the state(5) to be set 00:23:59.322 [2024-05-15 11:15:56.444678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11449e0 (9): Bad file descriptor 00:23:59.322 [2024-05-15 11:15:56.444706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.322 [2024-05-15 11:15:56.444731] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:59.322 [2024-05-15 11:15:56.444758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.322 [2024-05-15 11:15:56.444771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.322 [2024-05-15 11:15:56.444784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.322 [2024-05-15 11:15:56.444794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.322 [2024-05-15 11:15:56.444805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.322 [2024-05-15 11:15:56.444814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.322 [2024-05-15 11:15:56.444824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.322 [2024-05-15 11:15:56.444833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.322 [2024-05-15 11:15:56.444843] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.322 [2024-05-15 11:15:56.444852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.322 [2024-05-15 11:15:56.444862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:23:59.322 [2024-05-15 11:15:56.445281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1143e10 (9): Bad file descriptor 00:23:59.322 [2024-05-15 11:15:56.446294] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:59.322 [2024-05-15 11:15:56.446309] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:59.322 11:15:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.322 11:15:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:59.322 11:15:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
xargs 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:00.256 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:00.514 11:15:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:01.446 [2024-05-15 11:15:58.503719] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:01.446 [2024-05-15 11:15:58.503738] bdev_nvme.c:7047:discovery_poller: *INFO*: 
Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:01.446 [2024-05-15 11:15:58.503750] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:01.446 [2024-05-15 11:15:58.632140] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.446 [2024-05-15 11:15:58.693312] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:01.446 [2024-05-15 11:15:58.693345] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:01.446 [2024-05-15 11:15:58.693361] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:01.446 [2024-05-15 11:15:58.693374] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:01.446 [2024-05-15 11:15:58.693381] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:01.446 11:15:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:01.446 [2024-05-15 11:15:58.701612] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11886a0 was disconnected and freed. delete nvme_qpair. 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2366894 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2366894 ']' 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2366894 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2366894 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2366894' 00:24:02.821 killing process with pid 2366894 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2366894 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2366894 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.821 11:15:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.821 rmmod nvme_tcp 00:24:02.821 rmmod nvme_fabrics 00:24:02.821 rmmod nvme_keyring 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 
2366707 ']' 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2366707 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2366707 ']' 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2366707 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:02.821 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2366707 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2366707' 00:24:03.079 killing process with pid 2366707 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2366707 00:24:03.079 [2024-05-15 11:16:00.093729] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2366707 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.079 11:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.610 11:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:05.610 00:24:05.610 real 0m21.967s 00:24:05.610 user 0m27.557s 00:24:05.610 sys 0m5.281s 00:24:05.610 11:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:05.610 11:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.610 ************************************ 00:24:05.610 END TEST nvmf_discovery_remove_ifc 00:24:05.610 ************************************ 00:24:05.610 11:16:02 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:05.610 11:16:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:05.610 11:16:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:05.610 11:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.610 ************************************ 00:24:05.610 START TEST nvmf_identify_kernel_target 00:24:05.610 ************************************ 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:05.610 * Looking for test storage... 
00:24:05.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.610 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.611 11:16:02 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.611 11:16:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.611 11:16:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.877 11:16:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.877 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:10.878 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.878 11:16:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:10.878 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:10.878 Found net devices under 0000:86:00.0: cvl_0_0 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:10.878 Found net devices under 0000:86:00.1: cvl_0_1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:10.878 00:24:10.878 --- 10.0.0.2 ping statistics --- 00:24:10.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.878 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:24:10.878 00:24:10.878 --- 10.0.0.1 ping statistics --- 00:24:10.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.878 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.878 11:16:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:10.878 11:16:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:12.777 Waiting for block devices as requested 00:24:13.034 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:13.034 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:13.034 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:13.292 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:13.292 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:13.292 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:13.292 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:13.550 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:13.550 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:13.550 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:13.550 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:13.808 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:13.808 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:13.808 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:14.065 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:14.065 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:14.065 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:14.065 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:14.322 No valid GPT data, bailing 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:14.322 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:14.323 00:24:14.323 Discovery Log Number of Records 2, Generation counter 2 00:24:14.323 =====Discovery Log Entry 0====== 00:24:14.323 trtype: tcp 00:24:14.323 adrfam: ipv4 00:24:14.323 subtype: current discovery subsystem 00:24:14.323 treq: not specified, sq flow control disable supported 00:24:14.323 portid: 1 00:24:14.323 trsvcid: 4420 00:24:14.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:14.323 traddr: 10.0.0.1 00:24:14.323 eflags: none 00:24:14.323 sectype: none 00:24:14.323 =====Discovery Log Entry 1====== 00:24:14.323 trtype: tcp 00:24:14.323 adrfam: ipv4 00:24:14.323 subtype: nvme subsystem 00:24:14.323 treq: not specified, sq flow control disable supported 00:24:14.323 portid: 1 00:24:14.323 trsvcid: 4420 00:24:14.323 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:14.323 traddr: 10.0.0.1 00:24:14.323 eflags: none 00:24:14.323 sectype: none 00:24:14.323 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:14.323 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:14.323 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.323 ===================================================== 00:24:14.323 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:14.323 ===================================================== 00:24:14.323 Controller Capabilities/Features 00:24:14.323 ================================ 00:24:14.323 Vendor ID: 0000 00:24:14.323 Subsystem Vendor ID: 0000 00:24:14.323 Serial Number: a21a5f7e6af3dd93f1f3 00:24:14.323 Model Number: Linux 00:24:14.323 Firmware Version: 6.7.0-68 00:24:14.323 Recommended Arb Burst: 0 00:24:14.323 IEEE OUI Identifier: 00 00 00 00:24:14.323 Multi-path I/O 00:24:14.323 May have multiple subsystem ports: No 00:24:14.323 May have multiple controllers: No 00:24:14.323 Associated with SR-IOV VF: No 00:24:14.323 Max Data Transfer Size: Unlimited 00:24:14.323 Max Number of Namespaces: 0 00:24:14.323 Max Number of I/O Queues: 1024 00:24:14.323 NVMe Specification Version (VS): 1.3 00:24:14.323 NVMe Specification Version (Identify): 1.3 00:24:14.323 Maximum Queue Entries: 1024 00:24:14.323 Contiguous Queues Required: No 00:24:14.323 Arbitration Mechanisms Supported 00:24:14.323 Weighted Round Robin: Not Supported 00:24:14.323 Vendor Specific: Not Supported 00:24:14.323 Reset Timeout: 7500 ms 00:24:14.323 Doorbell Stride: 4 bytes 00:24:14.323 NVM Subsystem Reset: Not Supported 00:24:14.323 Command Sets Supported 00:24:14.323 NVM Command Set: Supported 00:24:14.323 Boot Partition: Not Supported 00:24:14.323 Memory Page Size Minimum: 4096 bytes 00:24:14.323 Memory Page Size Maximum: 4096 bytes 00:24:14.323 Persistent Memory Region: Not Supported 00:24:14.323 Optional Asynchronous Events Supported 00:24:14.323 Namespace Attribute Notices: Not Supported 00:24:14.323 Firmware Activation Notices: Not Supported 00:24:14.323 ANA Change Notices: Not Supported 00:24:14.323 PLE Aggregate Log Change Notices: Not Supported 
00:24:14.323 LBA Status Info Alert Notices: Not Supported 00:24:14.323 EGE Aggregate Log Change Notices: Not Supported 00:24:14.323 Normal NVM Subsystem Shutdown event: Not Supported 00:24:14.323 Zone Descriptor Change Notices: Not Supported 00:24:14.323 Discovery Log Change Notices: Supported 00:24:14.323 Controller Attributes 00:24:14.323 128-bit Host Identifier: Not Supported 00:24:14.323 Non-Operational Permissive Mode: Not Supported 00:24:14.323 NVM Sets: Not Supported 00:24:14.323 Read Recovery Levels: Not Supported 00:24:14.323 Endurance Groups: Not Supported 00:24:14.323 Predictable Latency Mode: Not Supported 00:24:14.323 Traffic Based Keep ALive: Not Supported 00:24:14.323 Namespace Granularity: Not Supported 00:24:14.323 SQ Associations: Not Supported 00:24:14.323 UUID List: Not Supported 00:24:14.323 Multi-Domain Subsystem: Not Supported 00:24:14.323 Fixed Capacity Management: Not Supported 00:24:14.323 Variable Capacity Management: Not Supported 00:24:14.323 Delete Endurance Group: Not Supported 00:24:14.323 Delete NVM Set: Not Supported 00:24:14.323 Extended LBA Formats Supported: Not Supported 00:24:14.323 Flexible Data Placement Supported: Not Supported 00:24:14.323 00:24:14.323 Controller Memory Buffer Support 00:24:14.323 ================================ 00:24:14.323 Supported: No 00:24:14.323 00:24:14.323 Persistent Memory Region Support 00:24:14.323 ================================ 00:24:14.323 Supported: No 00:24:14.323 00:24:14.323 Admin Command Set Attributes 00:24:14.323 ============================ 00:24:14.323 Security Send/Receive: Not Supported 00:24:14.323 Format NVM: Not Supported 00:24:14.323 Firmware Activate/Download: Not Supported 00:24:14.323 Namespace Management: Not Supported 00:24:14.323 Device Self-Test: Not Supported 00:24:14.323 Directives: Not Supported 00:24:14.323 NVMe-MI: Not Supported 00:24:14.323 Virtualization Management: Not Supported 00:24:14.323 Doorbell Buffer Config: Not Supported 00:24:14.323 Get LBA Status 
Capability: Not Supported 00:24:14.323 Command & Feature Lockdown Capability: Not Supported 00:24:14.323 Abort Command Limit: 1 00:24:14.323 Async Event Request Limit: 1 00:24:14.323 Number of Firmware Slots: N/A 00:24:14.323 Firmware Slot 1 Read-Only: N/A 00:24:14.323 Firmware Activation Without Reset: N/A 00:24:14.323 Multiple Update Detection Support: N/A 00:24:14.323 Firmware Update Granularity: No Information Provided 00:24:14.323 Per-Namespace SMART Log: No 00:24:14.323 Asymmetric Namespace Access Log Page: Not Supported 00:24:14.323 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:14.323 Command Effects Log Page: Not Supported 00:24:14.323 Get Log Page Extended Data: Supported 00:24:14.323 Telemetry Log Pages: Not Supported 00:24:14.323 Persistent Event Log Pages: Not Supported 00:24:14.323 Supported Log Pages Log Page: May Support 00:24:14.323 Commands Supported & Effects Log Page: Not Supported 00:24:14.323 Feature Identifiers & Effects Log Page:May Support 00:24:14.323 NVMe-MI Commands & Effects Log Page: May Support 00:24:14.323 Data Area 4 for Telemetry Log: Not Supported 00:24:14.323 Error Log Page Entries Supported: 1 00:24:14.323 Keep Alive: Not Supported 00:24:14.323 00:24:14.323 NVM Command Set Attributes 00:24:14.323 ========================== 00:24:14.323 Submission Queue Entry Size 00:24:14.323 Max: 1 00:24:14.323 Min: 1 00:24:14.323 Completion Queue Entry Size 00:24:14.323 Max: 1 00:24:14.323 Min: 1 00:24:14.323 Number of Namespaces: 0 00:24:14.323 Compare Command: Not Supported 00:24:14.323 Write Uncorrectable Command: Not Supported 00:24:14.323 Dataset Management Command: Not Supported 00:24:14.323 Write Zeroes Command: Not Supported 00:24:14.323 Set Features Save Field: Not Supported 00:24:14.323 Reservations: Not Supported 00:24:14.323 Timestamp: Not Supported 00:24:14.323 Copy: Not Supported 00:24:14.323 Volatile Write Cache: Not Present 00:24:14.323 Atomic Write Unit (Normal): 1 00:24:14.323 Atomic Write Unit (PFail): 1 
00:24:14.323 Atomic Compare & Write Unit: 1 00:24:14.323 Fused Compare & Write: Not Supported 00:24:14.323 Scatter-Gather List 00:24:14.323 SGL Command Set: Supported 00:24:14.323 SGL Keyed: Not Supported 00:24:14.323 SGL Bit Bucket Descriptor: Not Supported 00:24:14.323 SGL Metadata Pointer: Not Supported 00:24:14.323 Oversized SGL: Not Supported 00:24:14.323 SGL Metadata Address: Not Supported 00:24:14.323 SGL Offset: Supported 00:24:14.323 Transport SGL Data Block: Not Supported 00:24:14.323 Replay Protected Memory Block: Not Supported 00:24:14.323 00:24:14.323 Firmware Slot Information 00:24:14.323 ========================= 00:24:14.323 Active slot: 0 00:24:14.323 00:24:14.323 00:24:14.323 Error Log 00:24:14.323 ========= 00:24:14.323 00:24:14.323 Active Namespaces 00:24:14.323 ================= 00:24:14.323 Discovery Log Page 00:24:14.323 ================== 00:24:14.323 Generation Counter: 2 00:24:14.323 Number of Records: 2 00:24:14.323 Record Format: 0 00:24:14.323 00:24:14.323 Discovery Log Entry 0 00:24:14.323 ---------------------- 00:24:14.323 Transport Type: 3 (TCP) 00:24:14.323 Address Family: 1 (IPv4) 00:24:14.323 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:14.323 Entry Flags: 00:24:14.323 Duplicate Returned Information: 0 00:24:14.323 Explicit Persistent Connection Support for Discovery: 0 00:24:14.323 Transport Requirements: 00:24:14.323 Secure Channel: Not Specified 00:24:14.323 Port ID: 1 (0x0001) 00:24:14.324 Controller ID: 65535 (0xffff) 00:24:14.324 Admin Max SQ Size: 32 00:24:14.324 Transport Service Identifier: 4420 00:24:14.324 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:14.324 Transport Address: 10.0.0.1 00:24:14.324 Discovery Log Entry 1 00:24:14.324 ---------------------- 00:24:14.324 Transport Type: 3 (TCP) 00:24:14.324 Address Family: 1 (IPv4) 00:24:14.324 Subsystem Type: 2 (NVM Subsystem) 00:24:14.324 Entry Flags: 00:24:14.324 Duplicate Returned Information: 0 00:24:14.324 Explicit Persistent 
Connection Support for Discovery: 0 00:24:14.324 Transport Requirements: 00:24:14.324 Secure Channel: Not Specified 00:24:14.324 Port ID: 1 (0x0001) 00:24:14.324 Controller ID: 65535 (0xffff) 00:24:14.324 Admin Max SQ Size: 32 00:24:14.324 Transport Service Identifier: 4420 00:24:14.324 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:14.324 Transport Address: 10.0.0.1 00:24:14.324 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:14.324 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.587 get_feature(0x01) failed 00:24:14.587 get_feature(0x02) failed 00:24:14.587 get_feature(0x04) failed 00:24:14.587 ===================================================== 00:24:14.587 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:14.587 ===================================================== 00:24:14.587 Controller Capabilities/Features 00:24:14.587 ================================ 00:24:14.587 Vendor ID: 0000 00:24:14.587 Subsystem Vendor ID: 0000 00:24:14.587 Serial Number: 3937ad4774718717facd 00:24:14.587 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:14.587 Firmware Version: 6.7.0-68 00:24:14.587 Recommended Arb Burst: 6 00:24:14.587 IEEE OUI Identifier: 00 00 00 00:24:14.587 Multi-path I/O 00:24:14.587 May have multiple subsystem ports: Yes 00:24:14.587 May have multiple controllers: Yes 00:24:14.587 Associated with SR-IOV VF: No 00:24:14.587 Max Data Transfer Size: Unlimited 00:24:14.587 Max Number of Namespaces: 1024 00:24:14.587 Max Number of I/O Queues: 128 00:24:14.587 NVMe Specification Version (VS): 1.3 00:24:14.587 NVMe Specification Version (Identify): 1.3 00:24:14.587 Maximum Queue Entries: 1024 00:24:14.587 Contiguous Queues Required: No 00:24:14.587 Arbitration Mechanisms Supported 
00:24:14.587 Weighted Round Robin: Not Supported 00:24:14.587 Vendor Specific: Not Supported 00:24:14.587 Reset Timeout: 7500 ms 00:24:14.587 Doorbell Stride: 4 bytes 00:24:14.587 NVM Subsystem Reset: Not Supported 00:24:14.587 Command Sets Supported 00:24:14.587 NVM Command Set: Supported 00:24:14.587 Boot Partition: Not Supported 00:24:14.587 Memory Page Size Minimum: 4096 bytes 00:24:14.587 Memory Page Size Maximum: 4096 bytes 00:24:14.587 Persistent Memory Region: Not Supported 00:24:14.587 Optional Asynchronous Events Supported 00:24:14.587 Namespace Attribute Notices: Supported 00:24:14.587 Firmware Activation Notices: Not Supported 00:24:14.587 ANA Change Notices: Supported 00:24:14.587 PLE Aggregate Log Change Notices: Not Supported 00:24:14.587 LBA Status Info Alert Notices: Not Supported 00:24:14.587 EGE Aggregate Log Change Notices: Not Supported 00:24:14.587 Normal NVM Subsystem Shutdown event: Not Supported 00:24:14.587 Zone Descriptor Change Notices: Not Supported 00:24:14.587 Discovery Log Change Notices: Not Supported 00:24:14.587 Controller Attributes 00:24:14.587 128-bit Host Identifier: Supported 00:24:14.587 Non-Operational Permissive Mode: Not Supported 00:24:14.587 NVM Sets: Not Supported 00:24:14.587 Read Recovery Levels: Not Supported 00:24:14.587 Endurance Groups: Not Supported 00:24:14.587 Predictable Latency Mode: Not Supported 00:24:14.587 Traffic Based Keep ALive: Supported 00:24:14.587 Namespace Granularity: Not Supported 00:24:14.587 SQ Associations: Not Supported 00:24:14.587 UUID List: Not Supported 00:24:14.587 Multi-Domain Subsystem: Not Supported 00:24:14.587 Fixed Capacity Management: Not Supported 00:24:14.587 Variable Capacity Management: Not Supported 00:24:14.587 Delete Endurance Group: Not Supported 00:24:14.587 Delete NVM Set: Not Supported 00:24:14.587 Extended LBA Formats Supported: Not Supported 00:24:14.587 Flexible Data Placement Supported: Not Supported 00:24:14.587 00:24:14.587 Controller Memory Buffer Support 
00:24:14.587 ================================ 00:24:14.587 Supported: No 00:24:14.587 00:24:14.587 Persistent Memory Region Support 00:24:14.587 ================================ 00:24:14.587 Supported: No 00:24:14.587 00:24:14.587 Admin Command Set Attributes 00:24:14.587 ============================ 00:24:14.587 Security Send/Receive: Not Supported 00:24:14.587 Format NVM: Not Supported 00:24:14.587 Firmware Activate/Download: Not Supported 00:24:14.587 Namespace Management: Not Supported 00:24:14.587 Device Self-Test: Not Supported 00:24:14.587 Directives: Not Supported 00:24:14.587 NVMe-MI: Not Supported 00:24:14.587 Virtualization Management: Not Supported 00:24:14.587 Doorbell Buffer Config: Not Supported 00:24:14.587 Get LBA Status Capability: Not Supported 00:24:14.587 Command & Feature Lockdown Capability: Not Supported 00:24:14.587 Abort Command Limit: 4 00:24:14.587 Async Event Request Limit: 4 00:24:14.587 Number of Firmware Slots: N/A 00:24:14.587 Firmware Slot 1 Read-Only: N/A 00:24:14.587 Firmware Activation Without Reset: N/A 00:24:14.587 Multiple Update Detection Support: N/A 00:24:14.587 Firmware Update Granularity: No Information Provided 00:24:14.587 Per-Namespace SMART Log: Yes 00:24:14.587 Asymmetric Namespace Access Log Page: Supported 00:24:14.587 ANA Transition Time : 10 sec 00:24:14.587 00:24:14.587 Asymmetric Namespace Access Capabilities 00:24:14.587 ANA Optimized State : Supported 00:24:14.587 ANA Non-Optimized State : Supported 00:24:14.587 ANA Inaccessible State : Supported 00:24:14.587 ANA Persistent Loss State : Supported 00:24:14.587 ANA Change State : Supported 00:24:14.587 ANAGRPID is not changed : No 00:24:14.587 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:14.587 00:24:14.587 ANA Group Identifier Maximum : 128 00:24:14.587 Number of ANA Group Identifiers : 128 00:24:14.587 Max Number of Allowed Namespaces : 1024 00:24:14.587 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:14.587 Command Effects Log Page: Supported 
00:24:14.587 Get Log Page Extended Data: Supported 00:24:14.587 Telemetry Log Pages: Not Supported 00:24:14.587 Persistent Event Log Pages: Not Supported 00:24:14.587 Supported Log Pages Log Page: May Support 00:24:14.587 Commands Supported & Effects Log Page: Not Supported 00:24:14.587 Feature Identifiers & Effects Log Page:May Support 00:24:14.587 NVMe-MI Commands & Effects Log Page: May Support 00:24:14.587 Data Area 4 for Telemetry Log: Not Supported 00:24:14.587 Error Log Page Entries Supported: 128 00:24:14.587 Keep Alive: Supported 00:24:14.587 Keep Alive Granularity: 1000 ms 00:24:14.587 00:24:14.587 NVM Command Set Attributes 00:24:14.587 ========================== 00:24:14.587 Submission Queue Entry Size 00:24:14.587 Max: 64 00:24:14.587 Min: 64 00:24:14.587 Completion Queue Entry Size 00:24:14.587 Max: 16 00:24:14.587 Min: 16 00:24:14.587 Number of Namespaces: 1024 00:24:14.587 Compare Command: Not Supported 00:24:14.587 Write Uncorrectable Command: Not Supported 00:24:14.587 Dataset Management Command: Supported 00:24:14.587 Write Zeroes Command: Supported 00:24:14.587 Set Features Save Field: Not Supported 00:24:14.587 Reservations: Not Supported 00:24:14.587 Timestamp: Not Supported 00:24:14.587 Copy: Not Supported 00:24:14.587 Volatile Write Cache: Present 00:24:14.587 Atomic Write Unit (Normal): 1 00:24:14.587 Atomic Write Unit (PFail): 1 00:24:14.587 Atomic Compare & Write Unit: 1 00:24:14.587 Fused Compare & Write: Not Supported 00:24:14.587 Scatter-Gather List 00:24:14.587 SGL Command Set: Supported 00:24:14.587 SGL Keyed: Not Supported 00:24:14.587 SGL Bit Bucket Descriptor: Not Supported 00:24:14.587 SGL Metadata Pointer: Not Supported 00:24:14.587 Oversized SGL: Not Supported 00:24:14.587 SGL Metadata Address: Not Supported 00:24:14.587 SGL Offset: Supported 00:24:14.587 Transport SGL Data Block: Not Supported 00:24:14.587 Replay Protected Memory Block: Not Supported 00:24:14.587 00:24:14.587 Firmware Slot Information 00:24:14.587 
========================= 00:24:14.587 Active slot: 0 00:24:14.587 00:24:14.587 Asymmetric Namespace Access 00:24:14.587 =========================== 00:24:14.587 Change Count : 0 00:24:14.587 Number of ANA Group Descriptors : 1 00:24:14.587 ANA Group Descriptor : 0 00:24:14.587 ANA Group ID : 1 00:24:14.587 Number of NSID Values : 1 00:24:14.587 Change Count : 0 00:24:14.587 ANA State : 1 00:24:14.587 Namespace Identifier : 1 00:24:14.587 00:24:14.587 Commands Supported and Effects 00:24:14.587 ============================== 00:24:14.587 Admin Commands 00:24:14.587 -------------- 00:24:14.587 Get Log Page (02h): Supported 00:24:14.587 Identify (06h): Supported 00:24:14.587 Abort (08h): Supported 00:24:14.587 Set Features (09h): Supported 00:24:14.587 Get Features (0Ah): Supported 00:24:14.587 Asynchronous Event Request (0Ch): Supported 00:24:14.587 Keep Alive (18h): Supported 00:24:14.587 I/O Commands 00:24:14.587 ------------ 00:24:14.587 Flush (00h): Supported 00:24:14.587 Write (01h): Supported LBA-Change 00:24:14.587 Read (02h): Supported 00:24:14.587 Write Zeroes (08h): Supported LBA-Change 00:24:14.587 Dataset Management (09h): Supported 00:24:14.587 00:24:14.587 Error Log 00:24:14.587 ========= 00:24:14.587 Entry: 0 00:24:14.587 Error Count: 0x3 00:24:14.587 Submission Queue Id: 0x0 00:24:14.587 Command Id: 0x5 00:24:14.587 Phase Bit: 0 00:24:14.587 Status Code: 0x2 00:24:14.587 Status Code Type: 0x0 00:24:14.587 Do Not Retry: 1 00:24:14.587 Error Location: 0x28 00:24:14.587 LBA: 0x0 00:24:14.587 Namespace: 0x0 00:24:14.587 Vendor Log Page: 0x0 00:24:14.587 ----------- 00:24:14.587 Entry: 1 00:24:14.587 Error Count: 0x2 00:24:14.587 Submission Queue Id: 0x0 00:24:14.587 Command Id: 0x5 00:24:14.587 Phase Bit: 0 00:24:14.587 Status Code: 0x2 00:24:14.587 Status Code Type: 0x0 00:24:14.587 Do Not Retry: 1 00:24:14.587 Error Location: 0x28 00:24:14.587 LBA: 0x0 00:24:14.587 Namespace: 0x0 00:24:14.587 Vendor Log Page: 0x0 00:24:14.587 ----------- 00:24:14.587 
Entry: 2 00:24:14.587 Error Count: 0x1 00:24:14.587 Submission Queue Id: 0x0 00:24:14.587 Command Id: 0x4 00:24:14.587 Phase Bit: 0 00:24:14.587 Status Code: 0x2 00:24:14.587 Status Code Type: 0x0 00:24:14.587 Do Not Retry: 1 00:24:14.587 Error Location: 0x28 00:24:14.587 LBA: 0x0 00:24:14.587 Namespace: 0x0 00:24:14.587 Vendor Log Page: 0x0 00:24:14.587 00:24:14.587 Number of Queues 00:24:14.587 ================ 00:24:14.587 Number of I/O Submission Queues: 128 00:24:14.587 Number of I/O Completion Queues: 128 00:24:14.587 00:24:14.587 ZNS Specific Controller Data 00:24:14.587 ============================ 00:24:14.587 Zone Append Size Limit: 0 00:24:14.587 00:24:14.587 00:24:14.587 Active Namespaces 00:24:14.587 ================= 00:24:14.587 get_feature(0x05) failed 00:24:14.588 Namespace ID:1 00:24:14.588 Command Set Identifier: NVM (00h) 00:24:14.588 Deallocate: Supported 00:24:14.588 Deallocated/Unwritten Error: Not Supported 00:24:14.588 Deallocated Read Value: Unknown 00:24:14.588 Deallocate in Write Zeroes: Not Supported 00:24:14.588 Deallocated Guard Field: 0xFFFF 00:24:14.588 Flush: Supported 00:24:14.588 Reservation: Not Supported 00:24:14.588 Namespace Sharing Capabilities: Multiple Controllers 00:24:14.588 Size (in LBAs): 1953525168 (931GiB) 00:24:14.588 Capacity (in LBAs): 1953525168 (931GiB) 00:24:14.588 Utilization (in LBAs): 1953525168 (931GiB) 00:24:14.588 UUID: b54639b9-d437-4e05-bf8f-2d51c6ff72ba 00:24:14.588 Thin Provisioning: Not Supported 00:24:14.588 Per-NS Atomic Units: Yes 00:24:14.588 Atomic Boundary Size (Normal): 0 00:24:14.588 Atomic Boundary Size (PFail): 0 00:24:14.588 Atomic Boundary Offset: 0 00:24:14.588 NGUID/EUI64 Never Reused: No 00:24:14.588 ANA group ID: 1 00:24:14.588 Namespace Write Protected: No 00:24:14.588 Number of LBA Formats: 1 00:24:14.588 Current LBA Format: LBA Format #00 00:24:14.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:14.588 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.588 rmmod nvme_tcp 00:24:14.588 rmmod nvme_fabrics 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.588 11:16:11 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:16.540 11:16:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:19.813 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:24:19.813 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:19.813 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:20.072 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:20.329 00:24:20.329 real 0m15.000s 00:24:20.329 user 0m3.604s 00:24:20.329 sys 0m7.720s 00:24:20.329 11:16:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:20.329 11:16:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.329 ************************************ 00:24:20.329 END TEST nvmf_identify_kernel_target 00:24:20.329 ************************************ 00:24:20.329 11:16:17 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:20.329 11:16:17 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:20.329 11:16:17 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:20.329 11:16:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.329 ************************************ 00:24:20.329 START TEST nvmf_auth_host 00:24:20.329 ************************************ 00:24:20.329 11:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:20.330 * Looking for test storage... 
00:24:20.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.588 
11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.588 
11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.588 11:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:25.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.848 11:16:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:25.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:24:25.848 Found net devices under 0000:86:00.0: cvl_0_0 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:25.848 Found net devices under 0000:86:00.1: cvl_0_1 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- 
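The `gather_supported_nvmf_pci_devs` loop above matches PCI vendor/device IDs (here Intel `0x8086`/`0x159b` E810 NICs) and then collects the network interfaces under `/sys/bus/pci/devices/$pci/net/`, printing the "Found net devices under ..." lines. A hedged Python sketch of that sysfs walk (the function name is mine; the IDs and paths come from the log; on a machine without matching NICs it simply returns an empty list):

```python
from pathlib import Path

def find_pci_net_devs(vendor="0x8086", device="0x159b",
                      sysfs="/sys/bus/pci/devices"):
    """Return (pci_addr, [net ifnames]) for PCI devices matching vendor/device.

    Illustrative sketch of the discovery loop in the log, not SPDK's script.
    """
    found = []
    for dev in sorted(Path(sysfs).glob("*")):
        try:
            ven = (dev / "vendor").read_text().strip()
            did = (dev / "device").read_text().strip()
        except OSError:
            continue  # not a PCI device dir, or unreadable
        if ven == vendor and did == device:
            net = dev / "net"
            ifnames = sorted(p.name for p in net.iterdir()) if net.is_dir() else []
            found.append((dev.name, ifnames))
    return found

for pci, ifnames in find_pci_net_devs():
    print(f"Found net devices under {pci}: {' '.join(ifnames)}")
```

This is why the log reports two devices (`0000:86:00.0` / `cvl_0_0` and `0000:86:00.1` / `cvl_0_1`): both E810 ports match the ID filter and each exposes one netdev.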
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:25.848 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:25.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:25.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:24:25.849 00:24:25.849 --- 10.0.0.2 ping statistics --- 00:24:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.849 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:24:25.849 00:24:25.849 --- 10.0.0.1 ping statistics --- 00:24:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.849 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:25.849 11:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.849 11:16:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2378828 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2378828 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2378828 ']' 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:25.849 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:26.780 11:16:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1d351b73b50391e22baf675745aaec6 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GfY 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1d351b73b50391e22baf675745aaec6 0 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f1d351b73b50391e22baf675745aaec6 0 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f1d351b73b50391e22baf675745aaec6 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GfY 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GfY 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GfY 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=242b579d4adf67c9a003573fb1205a5a104942c057592da0fd6f7d8cb5ebc896 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.i8W 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 242b579d4adf67c9a003573fb1205a5a104942c057592da0fd6f7d8cb5ebc896 3 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 242b579d4adf67c9a003573fb1205a5a104942c057592da0fd6f7d8cb5ebc896 3 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=242b579d4adf67c9a003573fb1205a5a104942c057592da0fd6f7d8cb5ebc896 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:26.780 11:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.i8W 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.i8W 00:24:26.780 11:16:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.i8W 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:26.780 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a4f7db4428d55aa419ac39831b03d2d1706fe762c1565bf7 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9J5 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a4f7db4428d55aa419ac39831b03d2d1706fe762c1565bf7 0 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a4f7db4428d55aa419ac39831b03d2d1706fe762c1565bf7 0 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a4f7db4428d55aa419ac39831b03d2d1706fe762c1565bf7 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:26.781 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9J5 00:24:27.039 11:16:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9J5 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9J5 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=14067d4e2105b3f24c1a347ab5a8f3aa0d4e76eab17d4960 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ttu 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 14067d4e2105b3f24c1a347ab5a8f3aa0d4e76eab17d4960 2 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 14067d4e2105b3f24c1a347ab5a8f3aa0d4e76eab17d4960 2 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=14067d4e2105b3f24c1a347ab5a8f3aa0d4e76eab17d4960 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.039 11:16:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ttu 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ttu 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ttu 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=195602a4dac165e438a7e1a7f7aefa67 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.L8J 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 195602a4dac165e438a7e1a7f7aefa67 1 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 195602a4dac165e438a7e1a7f7aefa67 1 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=195602a4dac165e438a7e1a7f7aefa67 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.L8J 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.L8J 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.L8J 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:27.039 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4bd16b48f7270aada20af095ec976767 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Pdp 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4bd16b48f7270aada20af095ec976767 1 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4bd16b48f7270aada20af095ec976767 1 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4bd16b48f7270aada20af095ec976767 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:27.040 
11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Pdp 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Pdp 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Pdp 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cf2c920ad54f76a73a8942ae8f8bc28a892088f8d8c415b6 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.D3Z 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cf2c920ad54f76a73a8942ae8f8bc28a892088f8d8c415b6 2 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cf2c920ad54f76a73a8942ae8f8bc28a892088f8d8c415b6 2 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=cf2c920ad54f76a73a8942ae8f8bc28a892088f8d8c415b6 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:27.040 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.D3Z 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.D3Z 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.D3Z 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.298 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0aa5ad35f70459032878f79c07490b3b 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hOX 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0aa5ad35f70459032878f79c07490b3b 0 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0aa5ad35f70459032878f79c07490b3b 0 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.299 11:16:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0aa5ad35f70459032878f79c07490b3b 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hOX 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hOX 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hOX 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5d16a73b4f28c5018a5e51103fb7d83d4d198457b90f680559c37f60e901b8e 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.M3X 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5d16a73b4f28c5018a5e51103fb7d83d4d198457b90f680559c37f60e901b8e 3 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5d16a73b4f28c5018a5e51103fb7d83d4d198457b90f680559c37f60e901b8e 3 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5d16a73b4f28c5018a5e51103fb7d83d4d198457b90f680559c37f60e901b8e 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.M3X 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.M3X 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.M3X 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2378828 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2378828 ']' 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
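The `gen_dhchap_key` / `format_dhchap_key` steps traced above read N random bytes with `xxd -p /dev/urandom`, then pipe the hex string through an inline `python -` step to produce a `DHHC-1:<digest>:<base64>:` secret. The heredoc body is not shown in the trace; the sketch below is a reconstruction from the traced variables (`prefix=DHHC-1`, `key=<hex>`, `digest=<n>`) and from the shape of the secrets that appear later in the log (base64 payload = the ASCII hex string plus a 4-byte little-endian CRC32 trailer). Treat the exact layout as an assumption, not a quote of the script:

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Assumed reconstruction of the inline `python -` step in the trace:
    wrap an ASCII hex key as  <prefix>:<digest>:<base64(key_ascii + crc32_le)>:
    The ASCII hex string itself is the key material; a 4-byte little-endian
    CRC32 of it is appended before base64 encoding."""
    data = hex_key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"{prefix}:{digest_id:02x}:{b64}:"


# Key value copied from the trace above (sha256, digest=1)
secret = format_dhchap_key("4bd16b48f7270aada20af095ec976767", 1)
```

The secrets visible later in the log (e.g. `DHHC-1:00:YTRmN2Ri...ilihiw==:`) decode consistently with this layout: the base64 payload is the ASCII hex key followed by four CRC bytes.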
00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:27.299 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GfY 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.i8W ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i8W 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9J5 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.ttu ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ttu 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.L8J 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Pdp ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pdp 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.D3Z 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 
11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hOX ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hOX 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.M3X 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.557 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:27.558 11:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:30.084 Waiting for block devices as requested 00:24:30.084 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:30.084 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:30.342 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:30.342 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:30.342 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:30.600 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:30.600 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:30.600 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:30.600 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:30.857 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:30.857 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:30.857 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:30.857 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:31.114 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:31.114 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:31.114 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:31.371 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:31.936 11:16:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:31.936 No valid GPT data, bailing 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:31.936 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:31.937 00:24:31.937 Discovery Log Number of Records 2, Generation counter 2 00:24:31.937 =====Discovery Log Entry 0====== 00:24:31.937 trtype: tcp 00:24:31.937 adrfam: ipv4 00:24:31.937 subtype: current discovery subsystem 00:24:31.937 treq: not specified, sq flow control disable supported 00:24:31.937 portid: 1 00:24:31.937 trsvcid: 4420 00:24:31.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:31.937 traddr: 10.0.0.1 00:24:31.937 eflags: none 00:24:31.937 sectype: none 00:24:31.937 =====Discovery Log Entry 1====== 00:24:31.937 trtype: tcp 00:24:31.937 adrfam: ipv4 00:24:31.937 subtype: nvme subsystem 00:24:31.937 treq: not specified, sq flow control disable supported 00:24:31.937 portid: 1 00:24:31.937 trsvcid: 4420 00:24:31.937 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:31.937 traddr: 10.0.0.1 00:24:31.937 eflags: none 00:24:31.937 sectype: none 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.937 11:16:29 
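The `nvme discover` output above is a fixed-field record list: one `=====Discovery Log Entry N======` banner per record, then `key: value` lines. A small parser sketch over the two entries from the log (the helper itself is hypothetical, not part of the test suite; field names and values are taken verbatim from the trace):

```python
# Discovery log records copied from the trace above (abridged to a few fields)
SAMPLE = """\
=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: current discovery subsystem
subnqn: nqn.2014-08.org.nvmexpress.discovery
traddr: 10.0.0.1
=====Discovery Log Entry 1======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
subnqn: nqn.2024-02.io.spdk:cnode0
traddr: 10.0.0.1
"""


def parse_discovery_log(text: str) -> list:
    """Split '=====Discovery Log Entry N======' banners into key/value dicts.
    partition() on the first ':' keeps NQNs like 'nqn.2024-02.io.spdk:cnode0'
    intact even though they contain a colon themselves."""
    entries, current = [], None
    for line in text.splitlines():
        if line.startswith("====="):
            current = {}
            entries.append(current)
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return entries
```

Entry 0 is the well-known discovery subsystem; entry 1 is the kernel nvmet subsystem (`nqn.2024-02.io.spdk:cnode0`) that the configfs steps above just exposed on 10.0.0.1:4420.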
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.937 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.195 nvme0n1 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.195 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 nvme0n1 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:32.454 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.455 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.713 nvme0n1 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.713 11:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.971 nvme0n1 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.971 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.972 11:16:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 nvme0n1 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.972 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.231 11:16:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:33.231 11:16:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.231 nvme0n1 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.231 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.232 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.232 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.232 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.490 11:16:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:33.490 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.491 11:16:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.491 nvme0n1 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.491 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.750 
11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.750 nvme0n1 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.750 11:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.750 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.009 nvme0n1 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.009 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.010 11:16:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.010 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.268 nvme0n1 00:24:34.268 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.268 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.268 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.268 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.268 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.268 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.269 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.527 nvme0n1 00:24:34.527 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.527 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.527 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.527 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.527 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 
-- # xtrace_disable 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:34.528 11:16:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.528 11:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.786 nvme0n1 00:24:34.786 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.786 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.786 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.786 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.786 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.786 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.045 11:16:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.046 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.304 nvme0n1 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.304 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.305 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.563 nvme0n1 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.563 11:16:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.563 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.564 11:16:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.564 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 nvme0n1 00:24:35.823 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.823 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.823 11:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.823 11:16:32 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.823 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 11:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.823 
11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.823 11:16:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.823 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.081 nvme0n1 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.081 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.341 
11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:36.341 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:36.342 11:16:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.342 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.600 nvme0n1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.601 11:16:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.601 11:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.167 nvme0n1 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.167 11:16:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.168 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.426 nvme0n1 
00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.426 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.427 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.685 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.686 11:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.944 nvme0n1 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.944 11:16:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.944 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.510 nvme0n1 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 
00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.510 11:16:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.510 11:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.109 nvme0n1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.109 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.676 nvme0n1 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.676 11:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.241 nvme0n1 00:24:40.241 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.241 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.241 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.241 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.241 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:40.498 11:16:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.498 11:16:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.498 11:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.063 nvme0n1 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.063 11:16:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.063 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.628 nvme0n1 00:24:41.628 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.628 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.628 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.628 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.629 
11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.629 11:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.887 nvme0n1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.887 
11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.887 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.145 nvme0n1 00:24:42.145 11:16:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.145 11:16:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.145 11:16:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.145 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.403 nvme0n1
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.403 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.661 nvme0n1
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=:
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=:
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.661 nvme0n1
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.661 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.918 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=:
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=:
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.919 11:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.919 nvme0n1
00:24:42.919 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:42.919 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:42.919 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:42.919 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:42.919 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.919 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==:
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==:
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==:
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==:
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.177 nvme0n1
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.177 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU:
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr:
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU:
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr:
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.436 nvme0n1
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.436 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:43.694 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.695 nvme0n1
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.695 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=:
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=:
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:43.953 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.954 11:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.954 nvme0n1
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=:
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]]
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=:
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:43.954 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.212 nvme0n1
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:44.212 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:44.212 11:16:41
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.470 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.471 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.729 nvme0n1 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.729 11:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 nvme0n1 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.987 11:16:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.987 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.245 nvme0n1 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.245 11:16:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:45.245 11:16:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.245 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.504 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.762 nvme0n1 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:45.762 11:16:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.762 11:16:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.762 11:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.020 nvme0n1 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.020 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.278 
11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:46.278 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.537 nvme0n1
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU:
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr:
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU:
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr:
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:46.537 11:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.103 nvme0n1
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.104 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.362 nvme0n1
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.362 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=:
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=:
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.621 11:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.880 nvme0n1
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=:
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=:
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:47.880 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.446 nvme0n1
00:24:48.446 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:48.446 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:48.446 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:48.446 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:48.446 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.446 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==:
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==:
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==:
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]]
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==:
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:48.704 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:48.705 11:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.271 nvme0n1
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU:
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr:
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU:
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]]
00:24:49.271 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr:
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.272 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.838 nvme0n1
00:24:49.838 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.838 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:49.838 11:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:49.838 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.838 11:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==:
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq:
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:49.838 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.406 nvme0n1
00:24:50.406 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:50.406 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:50.406 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:50.406 11:16:47 nvmf_tcp.nvmf_auth_host
-- common/autotest_common.sh@560 -- # xtrace_disable 00:24:50.406 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.665 
11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.665 11:16:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:50.665 11:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.232 nvme0n1 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:51.232 
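The `DHHC-1:...` strings echoed throughout this log are DH-HMAC-CHAP secrets in the representation defined by NVMe TP 8006: a fixed `DHHC-1` prefix, a two-digit hash indicator, and a base64 field holding the raw secret followed by a 4-byte CRC-32 (so a mistyped key can be detected). A minimal sketch, decoding one key taken verbatim from this log (the field meanings are per the spec; the hash-indicator comment is an assumption based on the key lengths seen here):

```shell
#!/usr/bin/env bash
# Sketch: parse one DHHC-1 secret from this log.
# Format (per NVMe TP 8006): DHHC-1:<hash id>:<base64(secret || CRC-32 LE)>:
# In this log, "00" keys carry 32-byte secrets, "02" 48-byte, "03" 64-byte.
key='DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR:'

hash_id=$(printf '%s' "$key" | cut -d: -f2)
b64=$(printf '%s' "$key" | cut -d: -f3)

total=$(printf '%s' "$b64" | base64 -d | wc -c)          # secret + 4 CRC bytes
secret=$(printf '%s' "$b64" | base64 -d | head -c "$((total - 4))")

echo "hash id: $hash_id"                 # 00
echo "secret length: $((total - 4))"     # 32
echo "secret: $secret"                   # ASCII hex in these test keys
```

The secrets in this test suite are ASCII hex strings, which is why `head -c` yields printable output; real deployments may use arbitrary binary secrets.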
11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.232 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.490 nvme0n1 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.490 11:16:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.490 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.491 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.491 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.491 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.749 nvme0n1 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.749 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.749 
11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.750 nvme0n1 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.750 11:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:52.008 
11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:52.008 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.009 11:16:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.009 nvme0n1 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:24:52.009 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.268 nvme0n1 00:24:52.268 11:16:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.268 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.269 11:16:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.269 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.527 nvme0n1 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.527 
11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:52.527 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.528 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.786 nvme0n1 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.786 11:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.786 11:16:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.786 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.045 nvme0n1 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.045 11:16:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.045 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.304 nvme0n1 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.304 11:16:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.304 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.563 nvme0n1 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:53.563 11:16:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.563 11:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.821 nvme0n1 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:53.821 11:16:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.821 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.822 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.822 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:53.822 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.822 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.080 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.339 nvme0n1 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.339 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.598 nvme0n1 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.598 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.856 nvme0n1 00:24:54.856 11:16:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.856 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.856 11:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.856 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.856 11:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.856 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.857 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.857 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.857 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.857 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.114 nvme0n1 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.114 
11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.114 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.371 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.371 11:16:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.628 nvme0n1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.628 11:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.193 nvme0n1 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.193 11:16:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.193 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.451 nvme0n1 00:24:56.451 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.451 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.451 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.451 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.451 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.451 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.708 11:16:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.708 11:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.966 nvme0n1 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.966 11:16:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.966 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.967 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.595 nvme0n1 00:24:57.595 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.595 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.595 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.595 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.595 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFkMzUxYjczYjUwMzkxZTIyYmFmNjc1NzQ1YWFlYzYdA+TR: 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQyYjU3OWQ0YWRmNjdjOWEwMDM1NzNmYjEyMDVhNWExMDQ5NDJjMDU3NTkyZGEwZmQ2ZjdkOGNiNWViYzg5Ng5uH30=: 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.596 11:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.161 nvme0n1 00:24:58.161 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.161 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.162 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.727 nvme0n1 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTk1NjAyYTRkYWMxNjVlNDM4YTdlMWE3ZjdhZWZhNjdgbGhU: 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGJkMTZiNDhmNzI3MGFhZGEyMGFmMDk1ZWM5NzY3Njf+N3rr: 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.727 11:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.293 nvme0n1 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.293 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2YyYzkyMGFkNTRmNzZhNzNhODk0MmFlOGY4YmMyOGE4OTIwODhmOGQ4YzQxNWI2ZRCG4A==: 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: ]] 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhNWFkMzVmNzA0NTkwMzI4NzhmNzljMDc0OTBiM2IYnKrq: 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.551 11:16:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.551 11:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.116 nvme0n1 00:25:00.116 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.116 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.116 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.116 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.117 11:16:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkMTZhNzNiNGYyOGM1MDE4YTVlNTExMDNmYjdkODNkNGQxOTg0NTdiOTBmNjgwNTU5YzM3ZjYwZTkwMWI4ZR5MDp8=: 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:00.117 11:16:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.117 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.682 nvme0n1 00:25:00.682 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRmN2RiNDQyOGQ1NWFhNDE5YWMzOTgzMWIwM2QyZDE3MDZmZTc2MmMxNTY1YmY3ilihiw==: 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQwNjdkNGUyMTA1YjNmMjRjMWEzNDdhYjVhOGYzYWEwZDRlNzZlYWIxN2Q0OTYwDYye/w==: 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@649 -- # local es=0 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.683 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.942 request: 00:25:00.942 { 00:25:00.942 "name": "nvme0", 00:25:00.942 "trtype": "tcp", 00:25:00.942 "traddr": "10.0.0.1", 00:25:00.942 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:00.942 "adrfam": "ipv4", 00:25:00.942 "trsvcid": "4420", 00:25:00.942 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:00.942 "method": "bdev_nvme_attach_controller", 00:25:00.942 "req_id": 1 00:25:00.942 } 00:25:00.942 Got JSON-RPC error response 00:25:00.942 response: 00:25:00.942 { 00:25:00.942 "code": -32602, 00:25:00.942 "message": "Invalid parameters" 00:25:00.942 } 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:00.942 11:16:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.942 11:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.942 request: 00:25:00.942 { 00:25:00.942 "name": "nvme0", 00:25:00.942 "trtype": "tcp", 00:25:00.942 "traddr": "10.0.0.1", 00:25:00.942 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:00.942 "adrfam": "ipv4", 00:25:00.942 "trsvcid": "4420", 00:25:00.942 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:00.942 "dhchap_key": "key2", 00:25:00.942 "method": "bdev_nvme_attach_controller", 00:25:00.942 "req_id": 1 00:25:00.942 } 00:25:00.942 Got JSON-RPC error response 00:25:00.942 response: 00:25:00.942 { 00:25:00.942 "code": -32602, 00:25:00.942 "message": "Invalid parameters" 00:25:00.942 } 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 
00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.942 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.942 request: 00:25:00.942 { 00:25:00.942 "name": "nvme0", 00:25:00.942 "trtype": "tcp", 00:25:00.943 "traddr": "10.0.0.1", 00:25:00.943 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:00.943 "adrfam": "ipv4", 00:25:00.943 "trsvcid": "4420", 00:25:00.943 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:00.943 "dhchap_key": "key1", 00:25:00.943 "dhchap_ctrlr_key": "ckey2", 00:25:00.943 "method": "bdev_nvme_attach_controller", 00:25:00.943 "req_id": 1 00:25:00.943 } 00:25:00.943 Got JSON-RPC error response 00:25:00.943 response: 00:25:00.943 { 00:25:00.943 
"code": -32602, 00:25:00.943 "message": "Invalid parameters" 00:25:00.943 } 00:25:00.943 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.201 rmmod nvme_tcp 00:25:01.201 rmmod nvme_fabrics 00:25:01.201 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2378828 ']' 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2378828 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 2378828 ']' 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@951 -- # kill -0 2378828 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2378828 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2378828' 00:25:01.202 killing process with pid 2378828 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 2378828 00:25:01.202 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 2378828 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.460 11:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:03.363 11:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:05.897 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.897 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.897 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.897 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.897 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 
00:25:06.156 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:06.156 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:07.092 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:07.092 11:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GfY /tmp/spdk.key-null.9J5 /tmp/spdk.key-sha256.L8J /tmp/spdk.key-sha384.D3Z /tmp/spdk.key-sha512.M3X /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:07.092 11:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:09.626 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:09.626 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.1 
(8086 2021): Already using the vfio-pci driver 00:25:09.626 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:09.626 00:25:09.626 real 0m49.068s 00:25:09.626 user 0m44.143s 00:25:09.626 sys 0m11.416s 00:25:09.626 11:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:09.626 11:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.626 ************************************ 00:25:09.626 END TEST nvmf_auth_host 00:25:09.626 ************************************ 00:25:09.626 11:17:06 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:25:09.626 11:17:06 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:09.626 11:17:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:09.626 11:17:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:09.626 11:17:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.626 ************************************ 00:25:09.626 START TEST nvmf_digest 00:25:09.626 ************************************ 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:09.626 * Looking for test storage... 
00:25:09.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.626 11:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:14.895 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:14.895 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.895 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:14.896 Found net devices under 0000:86:00.0: cvl_0_0 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:14.896 Found net devices under 0000:86:00.1: cvl_0_1 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:25:14.896 11:17:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:25:14.896 00:25:14.896 --- 10.0.0.2 ping statistics --- 00:25:14.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.896 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:25:14.896 00:25:14.896 --- 10.0.0.1 ping statistics --- 00:25:14.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.896 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:14.896 11:17:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 ************************************ 00:25:15.155 START TEST nvmf_digest_clean 00:25:15.155 ************************************ 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:25:15.155 11:17:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2392186 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2392186 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2392186 ']' 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:15.155 11:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 [2024-05-15 11:17:12.242192] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:15.155 [2024-05-15 11:17:12.242235] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.155 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.155 [2024-05-15 11:17:12.299523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.155 [2024-05-15 11:17:12.379866] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.155 [2024-05-15 11:17:12.379901] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.155 [2024-05-15 11:17:12.379908] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.155 [2024-05-15 11:17:12.379913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.155 [2024-05-15 11:17:12.379918] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:15.155 [2024-05-15 11:17:12.379935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:16.089 null0 00:25:16.089 [2024-05-15 11:17:13.174639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.089 [2024-05-15 11:17:13.198646] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:16.089 [2024-05-15 11:17:13.198856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.089 11:17:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2392433 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2392433 /var/tmp/bperf.sock 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2392433 ']' 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:16.089 11:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:16.089 [2024-05-15 11:17:13.250482] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:16.089 [2024-05-15 11:17:13.250521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392433 ] 00:25:16.089 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.089 [2024-05-15 11:17:13.303880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.345 [2024-05-15 11:17:13.383001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.907 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:16.907 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:16.907 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:16.907 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:16.907 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:17.164 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.164 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b 
nvme0 00:25:17.420 nvme0n1 00:25:17.420 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:17.420 11:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.676 Running I/O for 2 seconds... 00:25:19.571 00:25:19.571 Latency(us) 00:25:19.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.571 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:19.571 nvme0n1 : 2.01 24853.27 97.08 0.00 0.00 5144.31 2607.19 12822.26 00:25:19.571 =================================================================================================================== 00:25:19.571 Total : 24853.27 97.08 0.00 0.00 5144.31 2607.19 12822.26 00:25:19.571 0 00:25:19.571 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:19.571 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:19.571 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:19.571 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:19.571 | select(.opcode=="crc32c") 00:25:19.571 | "\(.module_name) \(.executed)"' 00:25:19.571 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2392433 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2392433 ']' 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2392433 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2392433 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2392433' 00:25:19.828 killing process with pid 2392433 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2392433 00:25:19.828 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.828 00:25:19.828 Latency(us) 00:25:19.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.828 =================================================================================================================== 00:25:19.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.828 11:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2392433 00:25:20.085 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2393021 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2393021 /var/tmp/bperf.sock 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2393021 ']' 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:20.086 11:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:20.086 [2024-05-15 11:17:17.196620] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:20.086 [2024-05-15 11:17:17.196668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393021 ] 00:25:20.086 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:20.086 Zero copy mechanism will not be used. 00:25:20.086 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.086 [2024-05-15 11:17:17.251355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.086 [2024-05-15 11:17:17.328929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.018 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.275 nvme0n1 00:25:21.532 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:21.532 11:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.532 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.532 Zero copy mechanism will not be used. 00:25:21.532 Running I/O for 2 seconds... 00:25:23.487 00:25:23.487 Latency(us) 00:25:23.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.487 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:23.487 nvme0n1 : 2.00 5110.77 638.85 0.00 0.00 3127.83 908.24 8662.15 00:25:23.487 =================================================================================================================== 00:25:23.487 Total : 5110.77 638.85 0.00 0.00 3127.83 908.24 8662.15 00:25:23.487 0 00:25:23.487 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:23.487 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:23.487 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:23.487 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:23.487 | select(.opcode=="crc32c") 00:25:23.487 | "\(.module_name) \(.executed)"' 00:25:23.487 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:23.755 11:17:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2393021 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2393021 ']' 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2393021 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2393021 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2393021' 00:25:23.755 killing process with pid 2393021 00:25:23.755 11:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2393021 00:25:23.755 Received shutdown signal, test time was about 2.000000 seconds 00:25:23.755 00:25:23.755 Latency(us) 00:25:23.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.755 =================================================================================================================== 00:25:23.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.755 11:17:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2393021 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2393618 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2393618 /var/tmp/bperf.sock 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2393618 ']' 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:24.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:24.013 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:24.013 [2024-05-15 11:17:21.137400] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:24.013 [2024-05-15 11:17:21.137459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393618 ] 00:25:24.013 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.013 [2024-05-15 11:17:21.191670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.013 [2024-05-15 11:17:21.271660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.947 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:24.947 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:24.947 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:24.947 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:24.947 11:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:24.947 11:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.947 11:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b 
nvme0 00:25:25.513 nvme0n1 00:25:25.513 11:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:25.513 11:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:25.513 Running I/O for 2 seconds... 00:25:28.041 00:25:28.041 Latency(us) 00:25:28.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.041 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:28.041 nvme0n1 : 2.00 26978.95 105.39 0.00 0.00 4735.94 4530.53 11283.59 00:25:28.041 =================================================================================================================== 00:25:28.041 Total : 26978.95 105.39 0.00 0.00 4735.94 4530.53 11283.59 00:25:28.041 0 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:28.041 | select(.opcode=="crc32c") 00:25:28.041 | "\(.module_name) \(.executed)"' 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2393618 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2393618 ']' 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2393618 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2393618 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2393618' 00:25:28.041 killing process with pid 2393618 00:25:28.041 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2393618 00:25:28.041 Received shutdown signal, test time was about 2.000000 seconds 00:25:28.041 00:25:28.041 Latency(us) 00:25:28.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.042 =================================================================================================================== 00:25:28.042 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.042 11:17:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2393618 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2394312 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2394312 /var/tmp/bperf.sock 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2394312 ']' 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:28.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:28.042 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:28.042 [2024-05-15 11:17:25.183696] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:28.042 [2024-05-15 11:17:25.183746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394312 ] 00:25:28.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:28.042 Zero copy mechanism will not be used. 00:25:28.042 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.042 [2024-05-15 11:17:25.237621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.300 [2024-05-15 11:17:25.306589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.866 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:28.866 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:28.866 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:28.866 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:28.866 11:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:29.124 11:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:29.124 11:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:29.382 nvme0n1 00:25:29.640 11:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:29.640 11:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:29.640 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:29.640 Zero copy mechanism will not be used. 00:25:29.640 Running I/O for 2 seconds... 00:25:31.540 00:25:31.540 Latency(us) 00:25:31.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.540 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:31.540 nvme0n1 : 2.00 5911.75 738.97 0.00 0.00 2702.04 1937.59 7579.38 00:25:31.540 =================================================================================================================== 00:25:31.540 Total : 5911.75 738.97 0.00 0.00 2702.04 1937.59 7579.38 00:25:31.540 0 00:25:31.540 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:31.540 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:31.540 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:31.540 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:31.540 | select(.opcode=="crc32c") 00:25:31.540 | "\(.module_name) \(.executed)"' 00:25:31.540 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:31.798 11:17:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2394312 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2394312 ']' 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2394312 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2394312 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2394312' 00:25:31.798 killing process with pid 2394312 00:25:31.798 11:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2394312 00:25:31.798 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.798 00:25:31.798 Latency(us) 00:25:31.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.798 =================================================================================================================== 00:25:31.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.798 11:17:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2394312 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2392186 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2392186 ']' 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2392186 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2392186 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2392186' 00:25:32.056 killing process with pid 2392186 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2392186 00:25:32.056 [2024-05-15 11:17:29.213183] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:32.056 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2392186 00:25:32.315 00:25:32.315 real 0m17.234s 00:25:32.315 user 0m33.060s 00:25:32.315 sys 0m4.482s 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:25:32.315 ************************************ 00:25:32.315 END TEST nvmf_digest_clean 00:25:32.315 ************************************ 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:32.315 ************************************ 00:25:32.315 START TEST nvmf_digest_error 00:25:32.315 ************************************ 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2395039 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2395039 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2395039 ']' 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:25:32.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:32.315 11:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:32.315 [2024-05-15 11:17:29.535705] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:32.315 [2024-05-15 11:17:29.535744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.315 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.575 [2024-05-15 11:17:29.592284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.575 [2024-05-15 11:17:29.670032] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.575 [2024-05-15 11:17:29.670064] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.575 [2024-05-15 11:17:29.670071] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.575 [2024-05-15 11:17:29.670077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.575 [2024-05-15 11:17:29.670082] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.575 [2024-05-15 11:17:29.670106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.143 [2024-05-15 11:17:30.372154] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.143 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.402 null0 00:25:33.402 [2024-05-15 11:17:30.462985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.402 
[2024-05-15 11:17:30.486993] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:33.402 [2024-05-15 11:17:30.487223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2395282 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2395282 /var/tmp/bperf.sock 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2395282 ']' 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.402 11:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:33.402 [2024-05-15 11:17:30.534611] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:33.402 [2024-05-15 11:17:30.534654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395282 ] 00:25:33.402 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.402 [2024-05-15 11:17:30.586837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.402 [2024-05-15 11:17:30.658831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.336 
11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.336 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.595 nvme0n1 00:25:34.595 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:34.595 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.595 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.595 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.595 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:34.595 11:17:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:34.595 Running I/O for 2 seconds... 
00:25:34.854 [2024-05-15 11:17:31.871042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.871073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.871084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.883930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.883953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.883963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.896666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.896688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.896696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.904593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.904613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.904622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.916704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.916724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.916733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.926489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.926509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.926518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.935742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.935764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.935773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.944981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.945000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.945008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.954268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.954292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.954300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.963592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.963611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.963619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.972473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.972492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.972499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.982636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.982655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.854 [2024-05-15 11:17:31.982663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:31.991115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:31.991137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:31.991146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.002234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.002254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.002261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.015027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.015046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.015054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.027560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.027580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:20266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.027588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.036270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.036290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.036298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.046026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.046046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.046054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.058550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.058570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.058578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.070098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.070117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.070126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.078629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.078649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.078657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.091355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.091373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.091380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.104206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.104225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.104233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.854 [2024-05-15 11:17:32.116063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x217e9a0) 00:25:34.854 [2024-05-15 11:17:32.116082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.854 [2024-05-15 11:17:32.116091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.128733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.128752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.128760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.137511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.137530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.137542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.146993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.147012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.147020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.157104] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.157124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.157132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.166322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.166341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.166349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.175634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.175653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.175661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.184598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.113 [2024-05-15 11:17:32.184617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.113 [2024-05-15 11:17:32.184625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:35.113 [2024-05-15 11:17:32.194786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.194805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.194813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.203880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.203899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.203907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.213389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.213408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.213416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.222018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.222040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.222048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.231456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.231475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.231482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.239969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.239988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.239996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.250530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.250550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.250558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.261705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.261726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 
11:17:32.261734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.269830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.269851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.269859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.278921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.278941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.278949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.289843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.289863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.289872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.301851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.301871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8917 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.301879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.309705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.309724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.309733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.321281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.321300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.321308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.331469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.331488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.331496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.342844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.342864] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.342872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.351989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.352009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.352017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.361243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.361263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.361271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.114 [2024-05-15 11:17:32.369827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.114 [2024-05-15 11:17:32.369845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.114 [2024-05-15 11:17:32.369854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.380947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 
11:17:32.380967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.380975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.392256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.392276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.392287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.400923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.400942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.400950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.413830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.413857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.421510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.421529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.421538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.433571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.433591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.433600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.445211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.445230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.445238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.453750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.453769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.453777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.462892] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.462912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.462920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.473271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.473291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.473299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.483098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.483118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.483126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.491909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.491927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.373 [2024-05-15 11:17:32.491934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:35.373 [2024-05-15 11:17:32.502146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.373 [2024-05-15 11:17:32.502170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.502179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.511314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.511332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.511340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.519823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.519842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.519850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.530743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.530763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.530771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.539248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.539267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.539276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.550291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.550313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.550321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.561494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.561514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.561526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.570438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.570459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 
11:17:32.570467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.578918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.578939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.578947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.588516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.588537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.588545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.600034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.600055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.600063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.608263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.608284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16627 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.608292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.619114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.619134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.619142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.374 [2024-05-15 11:17:32.629239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.374 [2024-05-15 11:17:32.629259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.374 [2024-05-15 11:17:32.629266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.632 [2024-05-15 11:17:32.638735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.632 [2024-05-15 11:17:32.638756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.632 [2024-05-15 11:17:32.638763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.632 [2024-05-15 11:17:32.648224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.632 [2024-05-15 11:17:32.648249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.632 [2024-05-15 11:17:32.648258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.632 [2024-05-15 11:17:32.658260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.632 [2024-05-15 11:17:32.658279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.632 [2024-05-15 11:17:32.658288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.632 [2024-05-15 11:17:32.666916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.632 [2024-05-15 11:17:32.666936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.632 [2024-05-15 11:17:32.666945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.632 [2024-05-15 11:17:32.677764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.632 [2024-05-15 11:17:32.677785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.632 [2024-05-15 11:17:32.677794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.632 [2024-05-15 11:17:32.687921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x217e9a0) 00:25:35.632 [2024-05-15 11:17:32.687941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.632 [2024-05-15 11:17:32.687949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.696913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.696932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.696939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.707427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.707446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.707455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.716218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.716239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.716247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.725815] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.725835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.725843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.734209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.734228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.734236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.745652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.745672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.745681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.754358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.754377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.754386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.766097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.766118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.766126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.776951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.776971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.776980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.785801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.785833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.797651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.797671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.797679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.806764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.806783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.806791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.818248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.818269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.818282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.828833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.828853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.828861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.837214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.837234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 
11:17:32.837242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.847040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.847060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.847068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.855204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.855223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.855231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.865629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.865649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.865657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.876765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.876785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5103 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.876793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.633 [2024-05-15 11:17:32.889254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.633 [2024-05-15 11:17:32.889274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.633 [2024-05-15 11:17:32.889282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.901812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.901833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.901841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.910258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.910278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.910286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.921973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.921993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.922001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.933778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.933798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.933807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.944140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.944160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.944174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.952938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.952957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.952965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.965460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.965480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.965488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.973959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.973979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.973988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.985271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.985291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.985299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:32.993670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:32.993689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:32.993701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.005202] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.005222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.005230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.017855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.017874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.017882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.029767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.029786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.029794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.038103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.038122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.038129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.048653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.048672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.048680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.056507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.056526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.056534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.068022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.068041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.068049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.078443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.078461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.078469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.086642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.086665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.086673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.096966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.096985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.096993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.105375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.105394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.105402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.117220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.117240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 
11:17:33.117248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.128968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.128987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.128995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.137605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.137624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.892 [2024-05-15 11:17:33.137632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.892 [2024-05-15 11:17:33.149917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:35.892 [2024-05-15 11:17:33.149935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.893 [2024-05-15 11:17:33.149943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.162984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.163003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9703 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.163011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.175662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.175681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.175689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.183989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.184008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.184016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.195258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.195277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.195285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.204413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.204432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.204440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.214376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.214395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.214403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.223339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.223359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.223366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.235133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.235153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.235161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.245955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.245975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.245984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.254735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.254754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.254762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.267860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.267880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.267892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.277776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.277795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.277803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.285788] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.285809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.285817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.295862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.295883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.295891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.306181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.306201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.306209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.314352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.314372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.314380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.326039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.326058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.326066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.336804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.336823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.336830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.345413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.345432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.345440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.355405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.355428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.355436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.364582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.364602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.364611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.373832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.373852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.373860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.383057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.383076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.383083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.392379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.392400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 
11:17:33.392409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.401909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.401929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.401937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.151 [2024-05-15 11:17:33.411653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.151 [2024-05-15 11:17:33.411673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.151 [2024-05-15 11:17:33.411681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.422141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.422161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.422176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.430492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.430511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7277 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.430522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.442555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.442574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.442583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.453180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.453199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.453207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.461306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.461325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.461333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.471663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.471683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.471691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.482707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.482727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.482735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.495859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.495881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.495889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.505434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.505454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.505463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.514264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.514284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.514293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.524656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.524679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.524687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.534782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.534801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.534810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.543106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.543124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.543133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.555172] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.555191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.555200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.567044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.567063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.567071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.575999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.576018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.576026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.587796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.587815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.587823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.598456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.598475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.598483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.607430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.607450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.607457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.617282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.617301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.617310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.626501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.626520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.626528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.635924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.635944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.635952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.646062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.646082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.646089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.655125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.655144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.410 [2024-05-15 11:17:33.655152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.410 [2024-05-15 11:17:33.667064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.410 [2024-05-15 11:17:33.667084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.411 [2024-05-15 
11:17:33.667092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.675480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.675500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.675508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.686979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.686998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.687006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.697563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.697582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.697594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.706076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.706094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20055 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.706103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.718092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.718112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.718119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.730046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.730065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.730073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.738240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.669 [2024-05-15 11:17:33.738260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-05-15 11:17:33.738268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-05-15 11:17:33.749061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.749080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.749088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.757520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.757539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.757547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.767862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.767882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.767890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.778387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.778406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.778414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.786641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 
00:25:36.670 [2024-05-15 11:17:33.786664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.786673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.795930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.795949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.795957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.805957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.805976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.805984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.814711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.814729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.814737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.824922] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.824941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.824949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.833161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.833185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.833193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.843089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.843109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.843118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.670 [2024-05-15 11:17:33.852296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0) 00:25:36.670 [2024-05-15 11:17:33.852315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.670 [2024-05-15 11:17:33.852324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:25:36.670 [2024-05-15 11:17:33.861543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x217e9a0)
00:25:36.670 [2024-05-15 11:17:33.861562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:36.670 [2024-05-15 11:17:33.861570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:36.670
00:25:36.670 Latency(us)
00:25:36.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:36.670 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:36.670 nvme0n1 : 2.00 25276.66 98.74 0.00 0.00 5058.35 2436.23 17096.35
00:25:36.670 ===================================================================================================================
00:25:36.670 Total : 25276.66 98.74 0.00 0.00 5058.35 2436.23 17096.35
00:25:36.670 0
00:25:36.670 11:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:36.670 11:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:36.670 11:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:36.670 | .driver_specific
00:25:36.670 | .nvme_error
00:25:36.670 | .status_code
00:25:36.670 | .command_transient_transport_error'
00:25:36.670 11:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:36.928 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2395282
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2395282 ']'
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2395282
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2395282
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2395282'
00:25:36.929 killing process with pid 2395282
11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2395282
00:25:36.929 Received shutdown signal, test time was about 2.000000 seconds
00:25:36.929
00:25:36.929 Latency(us)
00:25:36.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:36.929 ===================================================================================================================
00:25:36.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:36.929 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2395282
00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:37.187
11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2395922 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2395922 /var/tmp/bperf.sock 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2395922 ']' 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:37.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:37.187 11:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:37.187 [2024-05-15 11:17:34.343383] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:37.187 [2024-05-15 11:17:34.343430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395922 ] 00:25:37.187 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:37.187 Zero copy mechanism will not be used. 
00:25:37.187 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.446 [2024-05-15 11:17:34.397511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:37.446 [2024-05-15 11:17:34.477335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:38.012 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:25:38.012 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:25:38.012 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:38.012 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:38.271 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:38.271 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:38.271 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:38.271 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:38.271 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:38.271 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:38.531 nvme0n1
00:25:38.531 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:38.531 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:38.531 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:38.531 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:38.531 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:38.531 11:17:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:38.531 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:38.531 Zero copy mechanism will not be used.
00:25:38.531 Running I/O for 2 seconds...
00:25:38.531 [2024-05-15 11:17:35.761827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0)
00:25:38.531 [2024-05-15 11:17:35.761859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.531 [2024-05-15 11:17:35.761873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:38.531 [2024-05-15 11:17:35.768791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0)
00:25:38.531 [2024-05-15 11:17:35.768815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.531 [2024-05-15 11:17:35.768825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:38.531 [2024-05-15 11:17:35.777178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x13d60f0) 00:25:38.531 [2024-05-15 11:17:35.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.531 [2024-05-15 11:17:35.777208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.531 [2024-05-15 11:17:35.785550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.531 [2024-05-15 11:17:35.785571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.531 [2024-05-15 11:17:35.785580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.531 [2024-05-15 11:17:35.793662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.531 [2024-05-15 11:17:35.793684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.531 [2024-05-15 11:17:35.793693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.802200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.802223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.802232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.809618] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.809638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.809646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.815909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.815931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.815939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.822052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.822072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.822080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.828401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.828425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.828434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.834570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.834590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.834597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.840968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.840988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.840996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.847236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.847256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.853144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.853168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.853177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.858852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.858873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.858881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.864688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.864708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.864716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.870234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.870255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.870263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.875244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.875265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 
11:17:35.875273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.880795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.880817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.880825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.886123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.886144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.886153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.891373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.891395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.891403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.896574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.896595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.790 [2024-05-15 11:17:35.896604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.790 [2024-05-15 11:17:35.901736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.790 [2024-05-15 11:17:35.901757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.901765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.906977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.906998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.907006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.912327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.912348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.912356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.917833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.917853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.917861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.923396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.923417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.923428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.928911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.928932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.928939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.934358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.934379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.934386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.939663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 
11:17:35.939684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.939692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.944983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.945003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.945011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.950247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.950267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.950275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.955570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.955590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.955598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.961064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.961085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.961093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.966555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.966575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.966583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.972108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.972132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.972140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.977598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.977619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.977627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.983137] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.983157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.983171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.988637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.988657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.988665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.994121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.994141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.994149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:35.999736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:35.999757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:35.999765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.005181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.005201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.005209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.010546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.010567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.010575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.015899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.015920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.015928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.021459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.021480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.021488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.026989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.027010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.027018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.032781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.032802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.032810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.038713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.791 [2024-05-15 11:17:36.038736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.791 [2024-05-15 11:17:36.038744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.791 [2024-05-15 11:17:36.046247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.792 [2024-05-15 11:17:36.046275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.792 [2024-05-15 11:17:36.046284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.792 [2024-05-15 11:17:36.053373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:38.792 [2024-05-15 11:17:36.053396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.792 [2024-05-15 11:17:36.053404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.060065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.060087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.060096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.065914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.065935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.065943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.070071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.070097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.051 [2024-05-15 11:17:36.070105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.077995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.078016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.078025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.085717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.085738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.085745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.094525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.094545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.094553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.101497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.101518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.101526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.109749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.109770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.109779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.117840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.117861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.117869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.125982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.126003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.126011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.133772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.133793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.133802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.140316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.140335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.140343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.146422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.146442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.146450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.152319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.152339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.152347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.158382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 
00:25:39.051 [2024-05-15 11:17:36.158402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.158411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.164730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.164750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.164758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.170754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.170775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.170784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.176674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.176694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.176702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.183491] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.183512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.183520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.188927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.188947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.188959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.194481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.051 [2024-05-15 11:17:36.194503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.051 [2024-05-15 11:17:36.194511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.051 [2024-05-15 11:17:36.200344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.200365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.200373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.206285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.206305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.206313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.212125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.212145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.212154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.218082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.218102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.218110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.224297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.224319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.224327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.230697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.230717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.230725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.236716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.236736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.236745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.242931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.242956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.242964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.248514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.248535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.248543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.254789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.254810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.260706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.260727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.260735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.266635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.266656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.266664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.273448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.273471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.273480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.280697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.280719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.280727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.287538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.287560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.287568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.296533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.296554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.296563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.304496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.304518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.304526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.052 [2024-05-15 11:17:36.311993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.052 [2024-05-15 11:17:36.312015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.052 [2024-05-15 11:17:36.312023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.319889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.319912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.319920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.328349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.328371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.328379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.335984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.336008] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.336016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.343201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.343225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.343234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.349941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.349962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.349970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.356250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.356270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.356279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.360659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.360680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.360692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.367975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.368005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.375283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.375305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.375313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.383043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.383065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.383074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.390832] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.390852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.390861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.397787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.397809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.397817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.404475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.404496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.404504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.410666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.311 [2024-05-15 11:17:36.410686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.311 [2024-05-15 11:17:36.410694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:25:39.311 [2024-05-15 11:17:36.416473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.416493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.416502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.422383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.422407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.422415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.428395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.428415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.428423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.434049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.434070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.434077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.439753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.439773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.439781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.445527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.445548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.445556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.451104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.451126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.451134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.456787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.456808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 
11:17:36.456816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.462550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.462571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.462580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.468350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.468371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.468379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.473990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.474011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.474019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.479581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.479602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.479610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.485226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.485245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.485254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.490955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.490976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.490984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.496502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.496522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.496530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.502043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.502064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.502072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.507556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.507577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.507585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.513190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.513210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.513219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.518951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.518976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.518983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.524598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.524619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.524627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.530162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.530188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.530212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.535866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.535887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.535895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.541650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.541670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.541678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.547211] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.547231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.547239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.552726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.552746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.552753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.557446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.312 [2024-05-15 11:17:36.557466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.312 [2024-05-15 11:17:36.557475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.312 [2024-05-15 11:17:36.562945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.313 [2024-05-15 11:17:36.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-05-15 11:17:36.562972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:39.313 [2024-05-15 11:17:36.568269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.313 [2024-05-15 11:17:36.568290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-05-15 11:17:36.568298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.313 [2024-05-15 11:17:36.573763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.313 [2024-05-15 11:17:36.573785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-05-15 11:17:36.573793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.579438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.579459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.579468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.584865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.584886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.584894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.590184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.590205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.590213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.595650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.595671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.595680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.601121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.601141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.601150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.606719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.606740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.606749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.612449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.612470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.612483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.618018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.618039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.618048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.623529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.623549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.623557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.629043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.629064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.629072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.634588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.634609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.634617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.640028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.640049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.640057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.645401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.645421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.645430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.650799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.650820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.650828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.656181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.656202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.656210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.661640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.661666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.661674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.667249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.667269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.667277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.672931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.672952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.672960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.678474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.678495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.678503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.683997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.684018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.684026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.689655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.689675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.609 [2024-05-15 11:17:36.689683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.609 [2024-05-15 11:17:36.695315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:39.609 [2024-05-15 11:17:36.695336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.695344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.701007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.701028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.701036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.706663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.706684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.706692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.712331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.712352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.712360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.718071] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.718092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.718101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.723783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.723803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.723811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.729325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.729346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.729354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.734943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.734963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.734971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.740862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.740883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.740890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.746954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.746975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.746982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.753385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.753406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.753426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.759564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.759585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.759596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.765516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.765537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.765546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.771750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.771772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.771780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.777983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.778005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.778014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.783933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.783954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.783962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.790016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.790038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.790047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.796485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.796506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.796514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.803235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.803256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.803265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.809734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.809755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.809763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.816321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.816341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.816349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.822788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.822809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.822817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.828935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.828957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.828965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.835000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.835021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.835029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.841000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.841021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.841029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.846905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.846928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.846937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.610 [2024-05-15 11:17:36.853163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.610 [2024-05-15 11:17:36.853192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.610 [2024-05-15 11:17:36.853200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.859130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.859154] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.859163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.864814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.864837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.864849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.870676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.870699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.870708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.876514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.876536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.876544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.882193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.882214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.882222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.887829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.887850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.887859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.893192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.893213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.893222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.898438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.898459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.898467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.903872] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.903894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.903902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.909694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.909716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.909724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.915210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.915235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.915243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.920949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.920970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.920978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.926592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.926613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.926621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.931938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.931959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.937333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.937354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.937362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.942803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.942824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.942832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.948285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.948306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.948314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.953895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.953915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.953923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.959588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.959609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.959617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.965362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.965383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.965391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.970826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.903 [2024-05-15 11:17:36.970847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.903 [2024-05-15 11:17:36.970856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.903 [2024-05-15 11:17:36.976551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:36.976573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:36.976581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:36.982135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:36.982156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:36.982170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:36.987939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:36.987960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:36.987968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:36.993776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:36.993797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:36.993806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:36.999460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:36.999481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:36.999489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.005227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.005247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.005255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.010962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.010982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.010994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.016570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.016591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.016599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.022146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.022174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.022182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.027926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.027948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.027956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.033862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.033883] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.033891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.039621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.039643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.045461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.045481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.045489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.051047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.051069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.051077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.056590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.056610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.056619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.062116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.062141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.062149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.068180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.068202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.068210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.074374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.074396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.074404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.080987] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.081010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.081018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.088952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.088974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.088983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.096335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.096356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.096364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.102572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.102593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.102602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.106285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.106304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.106312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.112445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.112475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.112484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.118400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.118421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.118429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.124658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.124679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.124687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.130531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.130551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.904 [2024-05-15 11:17:37.130559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.904 [2024-05-15 11:17:37.136120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.904 [2024-05-15 11:17:37.136139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.905 [2024-05-15 11:17:37.136147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.905 [2024-05-15 11:17:37.141820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.905 [2024-05-15 11:17:37.141840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.905 [2024-05-15 11:17:37.141848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.905 [2024-05-15 11:17:37.147428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.905 [2024-05-15 11:17:37.147448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.905 [2024-05-15 11:17:37.147456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.905 [2024-05-15 11:17:37.152882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.905 [2024-05-15 11:17:37.152901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.905 [2024-05-15 11:17:37.152909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.905 [2024-05-15 11:17:37.158269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.905 [2024-05-15 11:17:37.158289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.905 [2024-05-15 11:17:37.158297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.905 [2024-05-15 11:17:37.163650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:39.905 [2024-05-15 11:17:37.163670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.905 [2024-05-15 11:17:37.163682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.169114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.169133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.169141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.174792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.174812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.174821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.180413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.180434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.180442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.186153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.186180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.186189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.191716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.191736] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.191744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.197384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.197403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.197411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.203190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.203210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.164 [2024-05-15 11:17:37.203217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.164 [2024-05-15 11:17:37.208892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.164 [2024-05-15 11:17:37.208912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.208920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.214461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 
11:17:37.214481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.214489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.219518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.219540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.219548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.224810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.224831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.224839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.230085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.230106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.230114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.235506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.235527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.235534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.240580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.240601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.240609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.245663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.245685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.245693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.250951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.250973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.250982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.256259] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.256280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.256292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.261747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.261768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.261776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.267332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.267353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.267361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.272968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.272989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.272997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.278941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.278961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.278969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.284425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.284446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.284454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.289813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.289835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.289843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.295160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.295185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.295193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.300671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.300692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.300700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.306413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.306448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.306457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.311310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.311330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.311338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.317002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.317024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.317032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.322675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.322695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.322704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.328287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.328308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.328316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.333795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.333816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.333824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.339253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.339274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.339282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.344661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.344681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.344688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.350513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.350533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.350542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.355806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.355825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.165 [2024-05-15 11:17:37.355833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.165 [2024-05-15 11:17:37.361118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.165 [2024-05-15 11:17:37.361142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.361153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.366475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.366497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.366506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.371827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.371849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.371856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.377463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.377484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.377492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.382554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.382575] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.387739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.387759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.387767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.393287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.393309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.393317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.399033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.399053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.399068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.404627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.404648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.404656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.410143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.410169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.410178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.415615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.415636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.415645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.421543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.421564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.421571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.166 [2024-05-15 11:17:37.427184] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.166 [2024-05-15 11:17:37.427205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.166 [2024-05-15 11:17:37.427213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.432988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.433009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.433017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.438658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.438678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.438687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.444528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.444549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.444557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.449885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.449909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.449917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.455137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.455159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.455173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.460580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.460602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.460609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.466773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.466794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.466802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.473390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.473412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.473420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.479962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.479983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.479991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.486388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.486409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.486417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.492527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.492548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.492556] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.498858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.498879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.498887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.505063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.505082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.505091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.510686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.510707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.510715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.516896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.516916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.516925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.524073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.524095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.531742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.531765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.531774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.539798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.539819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.539828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.546923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.546943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.426 [2024-05-15 11:17:37.546951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.426 [2024-05-15 11:17:37.553984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.426 [2024-05-15 11:17:37.554006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.554014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.561269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.561294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.561302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.568602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.568624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.568632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.575335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.575356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.575364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.581354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.581376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.581384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.587563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.587584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.587592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.593774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.593793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.593801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.599576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.599597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.599605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.605273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.605295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.605303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.611392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.611413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.611421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.617406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.617426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.617434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.620374] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.620396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.620404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.626373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.626393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.626401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.631633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.631654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.631662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.637553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.637574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.637582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.643448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.643470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.643479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.648997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.649019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.649029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.653710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.653731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.653740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.659760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.659782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.659794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.666132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.666153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.666161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.673667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.673690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.427 [2024-05-15 11:17:37.673698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.427 [2024-05-15 11:17:37.681746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.427 [2024-05-15 11:17:37.681769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.428 [2024-05-15 11:17:37.681779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.428 [2024-05-15 11:17:37.689400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.428 [2024-05-15 11:17:37.689421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.428 [2024-05-15 11:17:37.689430] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.695795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.695817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.695825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.702452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.702473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.702481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.708827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.708848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.708856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.714879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.714900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.714908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.721252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.721277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.721286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.727473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.727494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.727503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.734270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.734290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.734298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.741139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.741161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.741175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.747531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.747553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.747561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.686 [2024-05-15 11:17:37.754002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d60f0) 00:25:40.686 [2024-05-15 11:17:37.754024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.686 [2024-05-15 11:17:37.754033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.686 00:25:40.686 Latency(us) 00:25:40.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.686 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:40.686 nvme0n1 : 2.00 5175.83 646.98 0.00 0.00 3087.69 676.73 11796.48 00:25:40.686 =================================================================================================================== 00:25:40.686 Total : 5175.83 646.98 0.00 0.00 3087.69 676.73 11796.48 00:25:40.686 0 00:25:40.686 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:40.686 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:25:40.686 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:40.686 | .driver_specific 00:25:40.686 | .nvme_error 00:25:40.686 | .status_code 00:25:40.686 | .command_transient_transport_error' 00:25:40.686 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 334 > 0 )) 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2395922 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2395922 ']' 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2395922 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2395922 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2395922' 00:25:40.944 killing process with pid 2395922 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2395922 00:25:40.944 Received shutdown signal, test time was about 2.000000 seconds 00:25:40.944 00:25:40.944 Latency(us) 00:25:40.944 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:25:40.944 =================================================================================================================== 00:25:40.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:40.944 11:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2395922 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2396460 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2396460 /var/tmp/bperf.sock 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2396460 ']' 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.944 11:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:41.203 [2024-05-15 11:17:38.236000] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:41.203 [2024-05-15 11:17:38.236046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396460 ] 00:25:41.203 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.203 [2024-05-15 11:17:38.289953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.203 [2024-05-15 11:17:38.369410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.137 
11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.137 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.396 nvme0n1 00:25:42.396 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:42.396 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.396 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.396 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.396 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:42.396 11:17:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.396 Running I/O for 2 seconds... 
00:25:42.396 [2024-05-15 11:17:39.550777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed0b0 00:25:42.396 [2024-05-15 11:17:39.551618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.396 [2024-05-15 11:17:39.551646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:42.396 [2024-05-15 11:17:39.559427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f96f8 00:25:42.396 [2024-05-15 11:17:39.560248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.396 [2024-05-15 11:17:39.560268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:42.396 [2024-05-15 11:17:39.569622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190feb58 00:25:42.396 [2024-05-15 11:17:39.570573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.396 [2024-05-15 11:17:39.570592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.396 [2024-05-15 11:17:39.579077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ee190 00:25:42.396 [2024-05-15 11:17:39.580141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.396 [2024-05-15 11:17:39.580159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.396 [2024-05-15 11:17:39.588661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fef90 00:25:42.396 [2024-05-15 11:17:39.589850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.396 [2024-05-15 11:17:39.589869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:42.396 [2024-05-15 11:17:39.596432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0ff8 00:25:42.396 [2024-05-15 11:17:39.596944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.596962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.606856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f96f8 00:25:42.397 [2024-05-15 11:17:39.608155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.608176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.616410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f4f40 00:25:42.397 [2024-05-15 11:17:39.617829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.617848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.625962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f3e60 00:25:42.397 [2024-05-15 11:17:39.627509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.627526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.632382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eaab8 00:25:42.397 [2024-05-15 11:17:39.633076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.633094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.641948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eee38 00:25:42.397 [2024-05-15 11:17:39.642829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.642848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.650554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fa3a0 00:25:42.397 [2024-05-15 11:17:39.651371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.397 [2024-05-15 11:17:39.651388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:42.397 [2024-05-15 11:17:39.660920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e9e10 00:25:42.655 [2024-05-15 11:17:39.661909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.655 [2024-05-15 11:17:39.661927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.655 [2024-05-15 11:17:39.670566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190edd58 00:25:42.655 [2024-05-15 11:17:39.671634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.655 [2024-05-15 11:17:39.671653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.655 [2024-05-15 11:17:39.680123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ea680 00:25:42.655 [2024-05-15 11:17:39.681387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.655 [2024-05-15 11:17:39.681405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.687604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fb048 00:25:42.656 [2024-05-15 11:17:39.688348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24231 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:42.656 [2024-05-15 11:17:39.688366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.696693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f3e60 00:25:42.656 [2024-05-15 11:17:39.697421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.697439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.705762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e73e0 00:25:42.656 [2024-05-15 11:17:39.706521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.706539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.714902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e27f0 00:25:42.656 [2024-05-15 11:17:39.715634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.715652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.723998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190df118 00:25:42.656 [2024-05-15 11:17:39.724729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:23427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.724746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.733113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ebb98 00:25:42.656 [2024-05-15 11:17:39.733848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.733866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.742463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e4140 00:25:42.656 [2024-05-15 11:17:39.743207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.743229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.752915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eff18 00:25:42.656 [2024-05-15 11:17:39.754028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.754047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.761381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f9b30 00:25:42.656 [2024-05-15 11:17:39.762217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.762235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.770368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e9e10 00:25:42.656 [2024-05-15 11:17:39.771214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.771232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.779458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eaef0 00:25:42.656 [2024-05-15 11:17:39.780228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.780247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.788506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fcdd0 00:25:42.656 [2024-05-15 11:17:39.789381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.789400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.797864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e12d8 
00:25:42.656 [2024-05-15 11:17:39.798496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.798514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.807206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0350 00:25:42.656 [2024-05-15 11:17:39.808137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.808155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.815800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f46d0 00:25:42.656 [2024-05-15 11:17:39.816721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.816739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.825480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190df988 00:25:42.656 [2024-05-15 11:17:39.826510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.826529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.835198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa7bf20) with pdu=0x2000190ea248 00:25:42.656 [2024-05-15 11:17:39.836327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.836347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.844928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f6890 00:25:42.656 [2024-05-15 11:17:39.846266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.846284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.853630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e27f0 00:25:42.656 [2024-05-15 11:17:39.854521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.854539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.862769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e01f8 00:25:42.656 [2024-05-15 11:17:39.863676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.863695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.872398] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fc560 00:25:42.656 [2024-05-15 11:17:39.873176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.873194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.883085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fe2e8 00:25:42.656 [2024-05-15 11:17:39.884604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.884622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.889669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ec840 00:25:42.656 [2024-05-15 11:17:39.890314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.890332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.898913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fa3a0 00:25:42.656 [2024-05-15 11:17:39.899538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.899557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:25:42.656 [2024-05-15 11:17:39.907357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5220 00:25:42.656 [2024-05-15 11:17:39.907963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.907980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:42.656 [2024-05-15 11:17:39.916938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ee190 00:25:42.656 [2024-05-15 11:17:39.917702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.656 [2024-05-15 11:17:39.917721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:42.915 [2024-05-15 11:17:39.928107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ee190 00:25:42.915 [2024-05-15 11:17:39.929328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.915 [2024-05-15 11:17:39.929345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:42.915 [2024-05-15 11:17:39.937589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fb8b8 00:25:42.915 [2024-05-15 11:17:39.938916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.915 [2024-05-15 11:17:39.938933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:42.915 [2024-05-15 11:17:39.947123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f5378 00:25:42.915 [2024-05-15 11:17:39.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.915 [2024-05-15 11:17:39.948595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:42.915 [2024-05-15 11:17:39.953544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eaef0 00:25:42.915 [2024-05-15 11:17:39.954145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:39.954163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:39.962690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e27f0 00:25:42.916 [2024-05-15 11:17:39.963302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:39.963321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:39.971145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ea680 00:25:42.916 [2024-05-15 11:17:39.971746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:39.971764] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:39.980630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e49b0 00:25:42.916 [2024-05-15 11:17:39.981351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:39.981372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:39.990098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fc998 00:25:42.916 [2024-05-15 11:17:39.990941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:39.990960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:39.999593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fcdd0 00:25:42.916 [2024-05-15 11:17:40.000550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.000568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.009338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e38d0 00:25:42.916 [2024-05-15 11:17:40.010450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.010468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.019256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e49b0 00:25:42.916 [2024-05-15 11:17:40.020491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.020513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.029000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6b70 00:25:42.916 [2024-05-15 11:17:40.030364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.030383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.038799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ddc00 00:25:42.916 [2024-05-15 11:17:40.040296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.040315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.046125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f9f68 00:25:42.916 [2024-05-15 11:17:40.046749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:42.916 [2024-05-15 11:17:40.046769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.055909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f7100 00:25:42.916 [2024-05-15 11:17:40.056655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.056674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.065715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e88f8 00:25:42.916 [2024-05-15 11:17:40.066593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.066612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.075171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f8618 00:25:42.916 [2024-05-15 11:17:40.076033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.076052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.084493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f8618 00:25:42.916 [2024-05-15 11:17:40.085357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6013 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.085376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.095350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fdeb0 00:25:42.916 [2024-05-15 11:17:40.096226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.096246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.103978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fb8b8 00:25:42.916 [2024-05-15 11:17:40.104830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.104848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.113601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f7100 00:25:42.916 [2024-05-15 11:17:40.114579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.114597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.123477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e88f8 00:25:42.916 [2024-05-15 11:17:40.124579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.124597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.133163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f4298 00:25:42.916 [2024-05-15 11:17:40.134393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.134412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.142803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fda78 00:25:42.916 [2024-05-15 11:17:40.144114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.144132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.152445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190efae0 00:25:42.916 [2024-05-15 11:17:40.153880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.153897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.158965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5658 00:25:42.916 [2024-05-15 11:17:40.159555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.159573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.168340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fac10 00:25:42.916 [2024-05-15 11:17:40.168928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.916 [2024-05-15 11:17:40.168946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:42.916 [2024-05-15 11:17:40.178796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fac10 00:25:43.176 [2024-05-15 11:17:40.179893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.179910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.188666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fb480 00:25:43.176 [2024-05-15 11:17:40.189845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.189863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.196641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fef90 
00:25:43.176 [2024-05-15 11:17:40.197369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.197386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.205710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f8e88 00:25:43.176 [2024-05-15 11:17:40.206402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.206438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.214966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fd208 00:25:43.176 [2024-05-15 11:17:40.215660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.215678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.223346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ebb98 00:25:43.176 [2024-05-15 11:17:40.224021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.224042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.232980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa7bf20) with pdu=0x2000190df118 00:25:43.176 [2024-05-15 11:17:40.233782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.233800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.242675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fef90 00:25:43.176 [2024-05-15 11:17:40.243596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.243614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.252503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e4140 00:25:43.176 [2024-05-15 11:17:40.253545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.253563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.262043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eea00 00:25:43.176 [2024-05-15 11:17:40.263208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.263225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.271593] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f8618 00:25:43.176 [2024-05-15 11:17:40.272875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.272892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.281111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f7970 00:25:43.176 [2024-05-15 11:17:40.282522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.282540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.287543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190feb58 00:25:43.176 [2024-05-15 11:17:40.288100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.288118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.296759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0350 00:25:43.176 [2024-05-15 11:17:40.297317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.297335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:25:43.176 [2024-05-15 11:17:40.307556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0350 00:25:43.176 [2024-05-15 11:17:40.308593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.308615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.316714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f8618 00:25:43.176 [2024-05-15 11:17:40.317779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.317798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.325290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190feb58 00:25:43.176 [2024-05-15 11:17:40.326339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.326357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.334965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e0a68 00:25:43.176 [2024-05-15 11:17:40.336106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.336124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.344638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fe720 00:25:43.176 [2024-05-15 11:17:40.345896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.345913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.354127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e84c0 00:25:43.176 [2024-05-15 11:17:40.355511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.355528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.363684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e99d8 00:25:43.176 [2024-05-15 11:17:40.365213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.365230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.370186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f1ca0 00:25:43.176 [2024-05-15 11:17:40.370833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.176 [2024-05-15 11:17:40.370851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:43.176 [2024-05-15 11:17:40.379378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f3a28 00:25:43.177 [2024-05-15 11:17:40.380031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.380048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:43.177 [2024-05-15 11:17:40.387838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e38d0 00:25:43.177 [2024-05-15 11:17:40.388487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.388505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:43.177 [2024-05-15 11:17:40.397389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f46d0 00:25:43.177 [2024-05-15 11:17:40.398168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.398186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:43.177 [2024-05-15 11:17:40.406906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6300 00:25:43.177 [2024-05-15 11:17:40.407790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.407808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:43.177 [2024-05-15 11:17:40.416503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ef270 00:25:43.177 [2024-05-15 11:17:40.417518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.417536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:43.177 [2024-05-15 11:17:40.426044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ddc00 00:25:43.177 [2024-05-15 11:17:40.427181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.427199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:43.177 [2024-05-15 11:17:40.435584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f46d0 00:25:43.177 [2024-05-15 11:17:40.436873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.177 [2024-05-15 11:17:40.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.445457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed4e8 00:25:43.436 [2024-05-15 11:17:40.446844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 
[2024-05-15 11:17:40.446861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.454985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f9f68 00:25:43.436 [2024-05-15 11:17:40.456484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.456502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.461426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed0b0 00:25:43.436 [2024-05-15 11:17:40.462066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.462084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.470641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f1868 00:25:43.436 [2024-05-15 11:17:40.471289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.471307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.479858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed4e8 00:25:43.436 [2024-05-15 11:17:40.480501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.480518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.488191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e3d08 00:25:43.436 [2024-05-15 11:17:40.488813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.488830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.497721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e1710 00:25:43.436 [2024-05-15 11:17:40.498464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.498481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.508570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e1710 00:25:43.436 [2024-05-15 11:17:40.509798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.509815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.518145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6b70 00:25:43.436 [2024-05-15 11:17:40.519554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:18519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.436 [2024-05-15 11:17:40.519571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:43.436 [2024-05-15 11:17:40.527741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0ff8 00:25:43.437 [2024-05-15 11:17:40.529210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.529228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.534195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fda78 00:25:43.437 [2024-05-15 11:17:40.534823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.534841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.543383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e88f8 00:25:43.437 [2024-05-15 11:17:40.544005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.544028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.552611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6b70 00:25:43.437 [2024-05-15 11:17:40.553236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.553254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.561097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f6cc8 00:25:43.437 [2024-05-15 11:17:40.561712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.561729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.570747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f6458 00:25:43.437 [2024-05-15 11:17:40.571514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.571532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.580566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fef90 00:25:43.437 [2024-05-15 11:17:40.581422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.581440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.590181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e49b0 00:25:43.437 
[2024-05-15 11:17:40.591147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.591168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.601198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e49b0 00:25:43.437 [2024-05-15 11:17:40.602639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.602656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.607620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f7970 00:25:43.437 [2024-05-15 11:17:40.608224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.608248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.616783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e4de8 00:25:43.437 [2024-05-15 11:17:40.617395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.617413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.625200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) 
with pdu=0x2000190e27f0 00:25:43.437 [2024-05-15 11:17:40.625786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.625804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.634661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6b70 00:25:43.437 [2024-05-15 11:17:40.635367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.635384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.644154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f2d80 00:25:43.437 [2024-05-15 11:17:40.644984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.645008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.653644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e0ea0 00:25:43.437 [2024-05-15 11:17:40.654590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.654608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.663112] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f8a50 00:25:43.437 [2024-05-15 11:17:40.664184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.664214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.672587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6b70 00:25:43.437 [2024-05-15 11:17:40.673776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.673793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.682033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e9e10 00:25:43.437 [2024-05-15 11:17:40.683353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.683370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 11:17:40.691468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190df118 00:25:43.437 [2024-05-15 11:17:40.692901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.692918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:43.437 [2024-05-15 
11:17:40.697967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f35f0 00:25:43.437 [2024-05-15 11:17:40.698586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.437 [2024-05-15 11:17:40.698604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:43.696 [2024-05-15 11:17:40.707356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5a90 00:25:43.696 [2024-05-15 11:17:40.707972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.696 [2024-05-15 11:17:40.707990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:43.696 [2024-05-15 11:17:40.716700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e9e10 00:25:43.696 [2024-05-15 11:17:40.717298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.696 [2024-05-15 11:17:40.717317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:43.696 [2024-05-15 11:17:40.725120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190feb58 00:25:43.696 [2024-05-15 11:17:40.725692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.696 [2024-05-15 11:17:40.725710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:25:43.696 [2024-05-15 11:17:40.734589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e38d0 00:25:43.696 [2024-05-15 11:17:40.735282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.696 [2024-05-15 11:17:40.735300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:43.696 [2024-05-15 11:17:40.744105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190de470 00:25:43.696 [2024-05-15 11:17:40.744924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.744942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.753578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5220 00:25:43.697 [2024-05-15 11:17:40.754513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.754531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.763176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190de038 00:25:43.697 [2024-05-15 11:17:40.764263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.764282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.772718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e38d0 00:25:43.697 [2024-05-15 11:17:40.773896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.782179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e9168 00:25:43.697 [2024-05-15 11:17:40.783475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.783496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.791635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190edd58 00:25:43.697 [2024-05-15 11:17:40.793056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.793074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.798015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f4298 00:25:43.697 [2024-05-15 11:17:40.798598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.798616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.807081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0350 00:25:43.697 [2024-05-15 11:17:40.807657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.807675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.816092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fb480 00:25:43.697 [2024-05-15 11:17:40.816676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.816694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.825161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f1868 00:25:43.697 [2024-05-15 11:17:40.825756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.825774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.833665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed4e8 00:25:43.697 [2024-05-15 11:17:40.834227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 
[2024-05-15 11:17:40.834245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.843207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f81e0 00:25:43.697 [2024-05-15 11:17:40.843871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.843889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.852607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f0788 00:25:43.697 [2024-05-15 11:17:40.853399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.853416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.862134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e6300 00:25:43.697 [2024-05-15 11:17:40.863048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.863065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.871611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190efae0 00:25:43.697 [2024-05-15 11:17:40.872646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24666 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.872664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.881074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190feb58 00:25:43.697 [2024-05-15 11:17:40.882250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.882268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.890590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f7da8 00:25:43.697 [2024-05-15 11:17:40.891860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.891887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.900054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eea00 00:25:43.697 [2024-05-15 11:17:40.901452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.901470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.906451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e9168 00:25:43.697 [2024-05-15 11:17:40.907005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:3875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.907023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.915645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5220 00:25:43.697 [2024-05-15 11:17:40.916192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.916211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.926999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e0a68 00:25:43.697 [2024-05-15 11:17:40.928185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.928204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.934619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ecc78 00:25:43.697 [2024-05-15 11:17:40.935285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.935303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.942945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e1f80 00:25:43.697 [2024-05-15 11:17:40.943601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.943618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:43.697 [2024-05-15 11:17:40.952441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f1ca0 00:25:43.697 [2024-05-15 11:17:40.953217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.697 [2024-05-15 11:17:40.953234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.956 [2024-05-15 11:17:40.962123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed4e8 00:25:43.956 [2024-05-15 11:17:40.963068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.956 [2024-05-15 11:17:40.963086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:43.956 [2024-05-15 11:17:40.971743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5658 00:25:43.956 [2024-05-15 11:17:40.972763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.956 [2024-05-15 11:17:40.972780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.956 [2024-05-15 11:17:40.981230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ebb98 00:25:43.956 
[2024-05-15 11:17:40.982366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.956 [2024-05-15 11:17:40.982383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:43.956 [2024-05-15 11:17:40.990724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ea680 00:25:43.956 [2024-05-15 11:17:40.991982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.956 [2024-05-15 11:17:40.991999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.956 [2024-05-15 11:17:41.000185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fb048 00:25:43.956 [2024-05-15 11:17:41.001562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.956 [2024-05-15 11:17:41.001580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.956 [2024-05-15 11:17:41.009700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e7c50 00:25:43.956 [2024-05-15 11:17:41.011297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.011314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.016106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa7bf20) with pdu=0x2000190eaab8 00:25:43.957 [2024-05-15 11:17:41.016835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.016856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.025321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ed0b0 00:25:43.957 [2024-05-15 11:17:41.026056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.026075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.034364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e3498 00:25:43.957 [2024-05-15 11:17:41.035099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.035116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.043442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e4de8 00:25:43.957 [2024-05-15 11:17:41.044191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.044209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.052455] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190ee5c8 00:25:43.957 [2024-05-15 11:17:41.053216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.053234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.061474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190f7970 00:25:43.957 [2024-05-15 11:17:41.062244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.062261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.070502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190de470 00:25:43.957 [2024-05-15 11:17:41.071238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.071256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.957 [2024-05-15 11:17:41.079536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190fd208 00:25:43.957 [2024-05-15 11:17:41.080307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.957 [2024-05-15 11:17:41.080325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:25:43.957 [2024-05-15 11:17:41.088849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e88f8
00:25:43.957 [2024-05-15 11:17:41.089622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:43.957 [2024-05-15 11:17:41.089641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... further near-identical data digest error / WRITE command / TRANSIENT TRANSPORT ERROR completion triplets on tqpair=(0xa7bf20), timestamps 11:17:41.097 through 11:17:41.500, elided ...]
00:25:44.476 [2024-05-15 11:17:41.507746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5658
00:25:44.476 [2024-05-15 11:17:41.508368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.476 [2024-05-15 11:17:41.508386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0
sqhd:004c p:0 m:0 dnr:0
00:25:44.477 [2024-05-15 11:17:41.516926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e4578
00:25:44.477 [2024-05-15 11:17:41.517434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.477 [2024-05-15 11:17:41.517452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:44.477 [2024-05-15 11:17:41.526427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190eaab8
00:25:44.477 [2024-05-15 11:17:41.527048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.477 [2024-05-15 11:17:41.527066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:44.477 [2024-05-15 11:17:41.535787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7bf20) with pdu=0x2000190e5ec8
00:25:44.477 [2024-05-15 11:17:41.536657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.477 [2024-05-15 11:17:41.536675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:44.477
00:25:44.477 Latency(us)
00:25:44.477 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:44.477 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:44.477 nvme0n1            :       2.00   27851.47     108.79      0.00     0.00    4589.25    2393.49   13050.21
00:25:44.477 ===================================================================================================================
00:25:44.477 Total              :               27851.47     108.79      0.00     0.00    4589.25    2393.49   13050.21
00:25:44.477 0
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:44.477 | .driver_specific
00:25:44.477 | .nvme_error
00:25:44.477 | .status_code
00:25:44.477 | .command_transient_transport_error'
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2396460
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2396460 ']'
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2396460
00:25:44.477 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2396460
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2396460'
killing process with pid 2396460
11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2396460
00:25:44.736 Received shutdown signal, test time was about 2.000000 seconds
00:25:44.736
00:25:44.736 Latency(us)
00:25:44.736 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:44.736 ===================================================================================================================
00:25:44.736 Total              :                   0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2396460
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2397152
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2397152 /var/tmp/bperf.sock
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2397152 ']'
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:25:44.736 11:17:41
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:25:44.736 11:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:44.995 [2024-05-15 11:17:42.037954] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
[2024-05-15 11:17:42.038004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397152 ]
00:25:44.995 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:44.995 Zero copy mechanism will not be used.
00:25:44.995 EAL: No free 2048 kB hugepages reported on node 1
00:25:45.929 [2024-05-15 11:17:42.092368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:45.929 [2024-05-15 11:17:42.161249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:45.929 11:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:25:45.929 11:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:25:45.929 11:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:45.929 11:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:45.929 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:45.929 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:45.929 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:45.929 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:45.929 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:45.929 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:46.187 nvme0n1
00:25:46.187 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:46.187 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:46.187 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:46.187 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:46.187 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:46.187 11:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:46.445 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:46.445 Zero copy mechanism will not be used.
00:25:46.445 Running I/O for 2 seconds...
00:25:46.445 [2024-05-15 11:17:43.544543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.445 [2024-05-15 11:17:43.544925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.445 [2024-05-15 11:17:43.544956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.445 [2024-05-15 11:17:43.549628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.445 [2024-05-15 11:17:43.550015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.445 [2024-05-15 11:17:43.550039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.446 [2024-05-15 11:17:43.554761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with
pdu=0x2000190fef90
00:25:46.446 [2024-05-15 11:17:43.555116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.446 [2024-05-15 11:17:43.555138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... further near-identical data digest error / WRITE command / TRANSIENT TRANSPORT ERROR completion triplets on tqpair=(0xa7c050), timestamps 11:17:43.559 through 11:17:43.609, elided ...]
00:25:46.446 [2024-05-15 11:17:43.615007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.446 [2024-05-15 11:17:43.615236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.446 [2024-05-15 11:17:43.615254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.620373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.620704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.620722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.626553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.626877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.626895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.631894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.632231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.632250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.637201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.637505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.637524] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.642732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.643018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.643037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.648460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.648771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.648790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.653751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.654074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.659359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.659660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 
11:17:43.659682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.664920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.665230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.665249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.671285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.671622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.671641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.678573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.678999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.679018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.685725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.686082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.686100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.692209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.692568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.692586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.698807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.699155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.699180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.446 [2024-05-15 11:17:43.705522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.446 [2024-05-15 11:17:43.705846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.446 [2024-05-15 11:17:43.705865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.711791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.712103] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.712121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.717995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.718343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.718362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.724684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.725021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.725039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.731291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.731643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.731661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.737325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 
11:17:43.737645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.737663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.743127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.743467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.743486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.749102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.749451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.749470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.754358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.754695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.754713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.759930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) 
with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.760243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.760262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.764716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.765033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.765050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.707 [2024-05-15 11:17:43.769127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.707 [2024-05-15 11:17:43.769453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.707 [2024-05-15 11:17:43.769471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.773476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.773784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.773802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.777816] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.778115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.778133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.782074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.782376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.782394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.786299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.786588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.786605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.790528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.790826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.790844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:46.708 [2024-05-15 11:17:43.794790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.795094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.795111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.799071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.799380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.799399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.803284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.803576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.803598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.807995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.808321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.808340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.812494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.812802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.812820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.817331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.817639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.817656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.822545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.822894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.822913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.828802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.829120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.829139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.834751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.835101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.835119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.841216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.841546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.841564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.847753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.848110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.848129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.854349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.854686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:46.708 [2024-05-15 11:17:43.854705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.860726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.861053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.861070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.867771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.868076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.868094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.873717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.874034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.874053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.880133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.880529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.880548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.887226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.887535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.887554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.893429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.893761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.893780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.899467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.899780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.899798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.905577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.905853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.905871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.910576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.910847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.910865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.916006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.708 [2024-05-15 11:17:43.916283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.708 [2024-05-15 11:17:43.916302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.708 [2024-05-15 11:17:43.921347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:46.709 [2024-05-15 11:17:43.921619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.709 [2024-05-15 11:17:43.921637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.709 [2024-05-15 11:17:43.925734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 
00:25:46.709 [2024-05-15 11:17:43.925980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.925999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.929773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.930031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.930050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.933844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.934099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.934118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.938175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.938438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.938456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.942382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.942644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.942663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.946571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.946821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.946843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.951001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.951255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.955263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.955534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.955554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.959509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.959760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.959778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.963997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.964263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.964282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.709 [2024-05-15 11:17:43.968617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.709 [2024-05-15 11:17:43.968880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.709 [2024-05-15 11:17:43.968898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.972990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.973250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.973268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.977457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.977727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.977746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.981747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.981998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.982016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.986104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.986369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.986387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.990368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.990637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.990656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.994681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.994938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.994956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:43.999585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:43.999877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:43.999895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:44.003583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:44.003841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:44.003860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:44.007411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.969 [2024-05-15 11:17:44.007659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.969 [2024-05-15 11:17:44.007678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.969 [2024-05-15 11:17:44.011265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.011513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.011532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.015141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.015416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.015434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.019570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.019807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.019825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.023512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.023752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.023771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.027336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.027560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.027578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.031064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.031309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.031327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.034806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.035042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.035060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.038539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.038766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.038784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.042286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.042509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.042528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.046033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.046296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.046316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.049896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.050134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.050153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.053693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.053924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.053946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.057501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.057743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.057763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.061251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.061502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.061521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.065033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.065272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.065290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.068812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.069062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.069081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.072594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.072853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.072871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.076379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.076607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.076626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.080093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.080356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.080375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.083881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.084122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.084141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.087646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.087883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.087901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.091423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.091664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.091683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.095144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.095380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.095399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.098923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.099176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.099196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.102838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.103064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.103083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.107734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.107957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.107975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.112627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.112891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.112911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.116920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.970 [2024-05-15 11:17:44.117148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.970 [2024-05-15 11:17:44.117174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.970 [2024-05-15 11:17:44.122414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.122653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.122673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.127092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.127339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.127358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.131734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.132022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.132040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.137354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.137688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.137707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.143445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.143719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.143739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.148656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.148922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.148941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.153260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.153538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.153557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.157369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.157602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.157621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.161201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.161449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.161467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.165066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.165321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.165344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.169338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.169585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.169604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.173438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.173681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.173700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.177314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.177573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.177591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.181178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.181433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.181451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.184997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.185254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.185272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.188834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.189070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.189088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.193059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.193313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.193332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.196919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.197155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.197181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.200742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.200974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.200992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.204567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.204805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.204824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.208427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.208666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.208685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.212241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.212483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.212502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.216065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.216309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.216329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.219974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.220228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.220248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.223801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.224044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.224063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.227795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.228043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.228061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.971 [2024-05-15 11:17:44.232195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:46.971 [2024-05-15 11:17:44.232457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.971 [2024-05-15 11:17:44.232476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:47.233 [2024-05-15 11:17:44.236099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:47.233 [2024-05-15 11:17:44.236351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.233 [2024-05-15 11:17:44.236370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:47.233 [2024-05-15 11:17:44.240052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:47.233 [2024-05-15 11:17:44.240314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.233 [2024-05-15 11:17:44.240334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:47.233 [2024-05-15 11:17:44.243935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:47.233 [2024-05-15 11:17:44.244170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.233 [2024-05-15 11:17:44.244190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:47.233 [2024-05-15 11:17:44.247864]
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.248113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.248131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.251702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.251945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.251964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.255572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.255825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.255844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.259420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.259677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.259696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.263336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.263597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.263616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.267476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.267723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.267746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.272143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.272410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.272429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.277091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.277356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.277375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.281430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.281676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.281695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.285762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.286006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.286025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.290280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.290517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.290535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.294669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.294903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.294921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.299244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.299488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.299506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.303451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.303690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.303708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.307701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.307954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.307973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.312082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.312329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.233 [2024-05-15 11:17:44.312348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.316466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.316709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.316728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.321023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.321277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.233 [2024-05-15 11:17:44.321296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.233 [2024-05-15 11:17:44.325211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.233 [2024-05-15 11:17:44.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.325471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.329181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.329416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.329434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.333080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.333336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.336954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.337208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.337227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.340846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.341093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.341112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.345269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.345503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.345522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.350769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.351010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.351029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.355325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.355572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.355590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.359659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.359887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.359905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.364061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 
00:25:47.234 [2024-05-15 11:17:44.364285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.364303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.368384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.368614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.368632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.372628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.372844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.372863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.376799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.377014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.377032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.380986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.381206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.381229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.385322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.385549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.385567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.390807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.391020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.391039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.395591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.395810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.399837] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.400042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.400061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.404091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.404304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.404323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.408493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.408709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.408728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.412818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.413058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.413076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.417019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.417246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.417265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.421237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.234 [2024-05-15 11:17:44.421460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.234 [2024-05-15 11:17:44.421478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.234 [2024-05-15 11:17:44.425873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.426093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.426112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.431469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.431677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.431695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.436131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.436361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.436380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.440440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.440668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.440687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.444633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.444853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.444872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.448854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.449072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.449091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.453286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.453526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.453545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.457230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.457478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.457502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.461190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.461425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.461443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.465050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.465290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.235 [2024-05-15 11:17:44.465309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.469039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.469255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.469275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.473217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.473455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.473474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.477546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.477768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.477787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.481685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.481888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.481907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.485952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.486162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.486188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.490110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.490325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.490343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.235 [2024-05-15 11:17:44.494425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.235 [2024-05-15 11:17:44.494640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.235 [2024-05-15 11:17:44.494658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.496 [2024-05-15 11:17:44.498728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.496 [2024-05-15 11:17:44.498947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.496 [2024-05-15 11:17:44.498966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.496 [2024-05-15 11:17:44.502949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.496 [2024-05-15 11:17:44.503183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.496 [2024-05-15 11:17:44.503202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.496 [2024-05-15 11:17:44.507230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.496 [2024-05-15 11:17:44.507454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.496 [2024-05-15 11:17:44.507472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.496 [2024-05-15 11:17:44.511742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.496 [2024-05-15 11:17:44.511962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.496 [2024-05-15 11:17:44.511980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.496 [2024-05-15 11:17:44.515891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 
00:25:47.496 [2024-05-15 11:17:44.516096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.496 [2024-05-15 11:17:44.516114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.496 [2024-05-15 11:17:44.519991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.496 [2024-05-15 11:17:44.520217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.520235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.524020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.524245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.524263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.528092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.528320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.528338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.532890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.533117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.533136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.538781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.539015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.539034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.543351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.543598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.543617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.547829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.548041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.548060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 
11:17:44.552050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.552277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.552295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.556348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.556572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.556591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.560549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.560760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.560779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.564647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.564861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.564880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.568978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.569202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.569224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.574231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.574439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.574457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.579058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.579265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.579284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.583280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.583490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.583509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.587905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.588122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.588140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.592087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.592313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.592331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.596301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.596524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.596543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.600564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.600792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.600810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.604857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.605090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.605109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.609190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.609412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.609431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.613502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.613708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.613727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.617807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.618018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.618037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.622006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.622224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.622242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.626568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.626783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.626801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.631386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.631650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.631669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.635811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.636044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.636063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.639787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.640015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.497 [2024-05-15 11:17:44.640034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.497 [2024-05-15 11:17:44.643726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.497 [2024-05-15 11:17:44.643942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.643961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.647637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.647866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.647885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.651548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.651779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.651797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.655485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.655707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.655726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.659442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.659673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.659692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.663377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.663595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.663614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.667314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 
00:25:47.498 [2024-05-15 11:17:44.667545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.667564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.671242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.671466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.671484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.675190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.675426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.675445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.679101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.679330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.679353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.683040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.683269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.683288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.686977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.687207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.687225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.690886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.691106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.691124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.694845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.695071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.695090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.698796] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.699038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.699056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.702771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.702987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.703006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.706776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.706983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.707002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.710748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.710972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.710991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.714762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.714974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.714993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.718696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.718919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.718937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.722624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.722850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.722868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.726571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.726802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.726822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.730507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.730741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.730759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.734477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.734695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.734713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.738664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.738879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.738898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.743561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.743771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.743790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.748697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.748927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.748946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.753157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.753386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.498 [2024-05-15 11:17:44.753405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.498 [2024-05-15 11:17:44.757527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.498 [2024-05-15 11:17:44.757744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.499 [2024-05-15 11:17:44.757763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.762224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.762475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.759 [2024-05-15 11:17:44.762494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.766526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.766799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.766817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.771921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.772152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.772178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.778175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.778499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.778517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.783802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.784049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.784068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.790148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.790370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.790389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.794924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.795148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.795176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.799461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.799684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.799703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.803791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.804002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.804020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.808300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.808525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.808544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.812483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.812694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.812713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.816649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.816858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.816876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.820787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 
00:25:47.759 [2024-05-15 11:17:44.821012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.821031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.825005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.825223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.825242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.759 [2024-05-15 11:17:44.830338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.759 [2024-05-15 11:17:44.830562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.759 [2024-05-15 11:17:44.830581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.835328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.835539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.835557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.839738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.839973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.839992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.844060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.844299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.848336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.848575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.848594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.852510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.852744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.852762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.856796] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.857015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.857034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.861141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.861384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.861403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.865471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.865698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.865716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.870001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.870221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.870240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.874202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.874410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.874428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.878116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.878350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.878369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.882027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.882244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.882263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.886122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.886341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.886360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.891034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.891257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.891276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.896247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.896475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.896494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.900652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.900865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.900884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.904943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.905170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.905189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.909382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.909591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.909613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.913769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.914010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.914029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.918025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.918236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.918254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.922408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.922641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.760 [2024-05-15 11:17:44.922660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.926675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.926885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.926903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.931654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.931877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.931895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.936794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.937012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.937031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.941227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.941466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.941484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.945524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.945738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.945756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.949791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.950001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.950021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.760 [2024-05-15 11:17:44.954072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.760 [2024-05-15 11:17:44.954290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.760 [2024-05-15 11:17:44.954309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.958411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.958644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.958663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.962748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.962984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.963004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.967003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.967215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.967234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.971456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.971664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.971682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.975460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 
00:25:47.761 [2024-05-15 11:17:44.975682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.975701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.979407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.979627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.979646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.983320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.983556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.983574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.987234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.987458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.987477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.991534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.991752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.991771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:44.996430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:44.996647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:44.996665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:45.001643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:45.001849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:45.001868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:45.006244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:45.006463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:45.006482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:45.010522] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:45.010733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:45.010753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:45.014811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:45.015037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:45.015056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.761 [2024-05-15 11:17:45.019117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:47.761 [2024-05-15 11:17:45.019333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.761 [2024-05-15 11:17:45.019352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.021 [2024-05-15 11:17:45.023365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.021 [2024-05-15 11:17:45.023593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.021 [2024-05-15 11:17:45.023615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:48.021 [2024-05-15 11:17:45.027628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.021 [2024-05-15 11:17:45.027843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.021 [2024-05-15 11:17:45.027861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.021 [2024-05-15 11:17:45.032435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.021 [2024-05-15 11:17:45.032658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.021 [2024-05-15 11:17:45.032676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.021 [2024-05-15 11:17:45.036847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.021 [2024-05-15 11:17:45.037062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.021 [2024-05-15 11:17:45.037080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.021 [2024-05-15 11:17:45.041081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.021 [2024-05-15 11:17:45.041298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.021 [2024-05-15 11:17:45.041317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.045354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.045568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.045586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.049586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.049793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.049812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.053920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.054149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.054172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.058451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.058721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.058739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.064368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.064674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.064692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.070700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.070944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.070962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.078000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.078338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.078357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.084758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.085049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.092106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.092313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.092331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.099687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.099931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.099949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.106967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.107233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.107252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.113990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.114337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.114355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.021 [2024-05-15 11:17:45.121668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.021 [2024-05-15 11:17:45.121992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.021 [2024-05-15 11:17:45.122011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.127260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.127506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.127525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.131757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.132019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.132037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.136628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.136892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.136910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.141738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.142015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.142033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.146521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.146733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.146753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.151224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.151495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.151513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.155846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.156125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.156143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.160046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.160325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.160343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.164685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.164968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.164992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.170001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.170280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.170299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.174722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.174984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.175003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.179423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.179661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.179680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.183578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.183814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.183833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.188279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.188593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.188611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.193507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.193764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.193783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.198962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.199274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.199293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.204587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.204913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.204932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.210735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.210986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.211004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.218087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.218306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.218324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.224235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.224542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.224560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.230742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.231048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.231067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.237379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.237664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.237683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.244865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.245151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.245175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.251583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.251882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.251901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.258687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.259031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.259050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.265078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.265414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.265436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.272246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.272550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.272568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.022 [2024-05-15 11:17:45.279983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.022 [2024-05-15 11:17:45.280322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.022 [2024-05-15 11:17:45.280340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.286420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.286651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.286670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.290903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.291123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.291142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.295072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.295327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.295347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.299149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.299388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.299407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.303260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.303500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.303519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.307231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.307491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.307510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.311377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.311617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.311635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.315766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.315987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.316005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.321097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.321324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.321343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.326240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.326454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.326472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.331514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.331724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.331742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.336674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.336887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.336905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.342060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.342280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.342299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.347322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.347532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.347550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.352388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.352597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.352615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.357614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.357826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.357844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.362984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.281 [2024-05-15 11:17:45.363227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.281 [2024-05-15 11:17:45.363245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.281 [2024-05-15 11:17:45.368320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.368531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.368550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.373411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.373628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.373647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.378707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.378915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.378934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.384039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.384269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.384287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.388945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.389157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.389181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.394149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.394367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.394384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.399276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.399496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.399519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.404338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.404546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.404564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.409758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.409996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.410014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.414025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.414254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.418069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.418281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.418300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.422131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.422356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.422374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.426552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.426761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.426779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.430609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.430839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.430857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.434948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.435148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.435171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.440218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.440431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.440449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.445313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.445580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.445599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.451547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.451754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.451773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.459039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.459309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.459328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.466514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.466809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.466827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.473843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.474061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.474080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.481912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.482171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.482190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.488691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.488897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.282 [2024-05-15 11:17:45.488916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.282 [2024-05-15 11:17:45.496115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90
00:25:48.282 [2024-05-15 11:17:45.496399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1
lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.282 [2024-05-15 11:17:45.496418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.282 [2024-05-15 11:17:45.503472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.282 [2024-05-15 11:17:45.503780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.282 [2024-05-15 11:17:45.503801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.282 [2024-05-15 11:17:45.510330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.282 [2024-05-15 11:17:45.510537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.282 [2024-05-15 11:17:45.510556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.282 [2024-05-15 11:17:45.517514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.282 [2024-05-15 11:17:45.517718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.282 [2024-05-15 11:17:45.517736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.282 [2024-05-15 11:17:45.525081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.282 [2024-05-15 11:17:45.525401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.282 [2024-05-15 11:17:45.525419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.282 [2024-05-15 11:17:45.532523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa7c050) with pdu=0x2000190fef90 00:25:48.283 [2024-05-15 11:17:45.532786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-05-15 11:17:45.532805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.283 00:25:48.283 Latency(us) 00:25:48.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.283 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:48.283 nvme0n1 : 2.00 6447.49 805.94 0.00 0.00 2476.45 1723.88 13278.16 00:25:48.283 =================================================================================================================== 00:25:48.283 Total : 6447.49 805.94 0.00 0.00 2476.45 1723.88 13278.16 00:25:48.283 0 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:48.541 | .driver_specific 00:25:48.541 | .nvme_error 00:25:48.541 | .status_code 00:25:48.541 | .command_transient_transport_error' 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 416 > 0 )) 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2397152 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2397152 ']' 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2397152 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2397152 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2397152' 00:25:48.541 killing process with pid 2397152 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2397152 00:25:48.541 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.541 00:25:48.541 Latency(us) 00:25:48.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.541 =================================================================================================================== 00:25:48.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.541 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2397152 00:25:48.799 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2395039 00:25:48.799 11:17:45 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2395039 ']' 00:25:48.799 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2395039 00:25:48.799 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:48.799 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:48.799 11:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2395039 00:25:48.799 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:48.799 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:48.799 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2395039' 00:25:48.799 killing process with pid 2395039 00:25:48.799 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2395039 00:25:48.799 [2024-05-15 11:17:46.028046] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:48.799 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2395039 00:25:49.058 00:25:49.058 real 0m16.758s 00:25:49.058 user 0m31.809s 00:25:49.058 sys 0m4.649s 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.058 ************************************ 00:25:49.058 END TEST nvmf_digest_error 00:25:49.058 ************************************ 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT 
SIGTERM EXIT 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.058 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.058 rmmod nvme_tcp 00:25:49.058 rmmod nvme_fabrics 00:25:49.058 rmmod nvme_keyring 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2395039 ']' 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2395039 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 2395039 ']' 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 2395039 00:25:49.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2395039) - No such process 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 2395039 is not found' 00:25:49.317 Process with pid 2395039 is not found 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.317 11:17:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.220 11:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.220 00:25:51.220 real 0m41.735s 00:25:51.220 user 1m6.443s 00:25:51.220 sys 0m13.289s 00:25:51.220 11:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:51.220 11:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:51.220 ************************************ 00:25:51.220 END TEST nvmf_digest 00:25:51.220 ************************************ 00:25:51.220 11:17:48 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:25:51.220 11:17:48 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:25:51.220 11:17:48 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:25:51.220 11:17:48 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:51.220 11:17:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:51.220 11:17:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:51.220 11:17:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.220 ************************************ 00:25:51.220 START TEST nvmf_bdevperf 00:25:51.220 ************************************ 00:25:51.220 11:17:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:51.479 * Looking for test storage... 
00:25:51.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.479 11:17:48 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.479 11:17:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.480 11:17:48 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.480 11:17:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.755 11:17:53 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.755 
11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:56.755 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:56.755 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:56.755 Found net devices under 0000:86:00.0: cvl_0_0 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:56.755 Found net devices under 0000:86:00.1: cvl_0_1 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:56.755 11:17:53 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.755 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:56.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:25:56.756 00:25:56.756 --- 10.0.0.2 ping statistics --- 00:25:56.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.756 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:56.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:56.756 00:25:56.756 --- 10.0.0.1 ping statistics --- 00:25:56.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.756 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2401406 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2401406 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:56.756 11:17:53 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 2401406 ']' 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:56.756 11:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:56.756 [2024-05-15 11:17:53.997909] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:25:56.756 [2024-05-15 11:17:53.997952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.013 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.013 [2024-05-15 11:17:54.055306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:57.013 [2024-05-15 11:17:54.135667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.013 [2024-05-15 11:17:54.135702] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.013 [2024-05-15 11:17:54.135709] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.013 [2024-05-15 11:17:54.135716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.013 [2024-05-15 11:17:54.135721] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
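The interface plumbing logged above (nvmf/common.sh@229 through @268) can be condensed into the following sketch. It is a hypothetical reconstruction from the log, not the script itself: the interface names cvl_0_0/cvl_0_1 come from the earlier PCI scan, and since every command needs root, the helper only records the sequence into $plan instead of executing it.

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as logged above (assumed reconstruction).
# The target-side port moves into its own network namespace so that
# 10.0.0.1 <-> 10.0.0.2 traffic crosses the physical link instead of
# being short-circuited through the kernel loopback path.
set -euo pipefail

TARGET_IF=cvl_0_0          # moves into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1       # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

plan=""
run() { plan+="$*"$'\n'; }  # stub: record commands instead of running them

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
printf '%s' "$plan"
```

The two pings at the end mirror the round-trip check in the log (0.171 ms and 0.198 ms above), confirming both directions work before the target is started inside the namespace with `ip netns exec`.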
00:25:57.013 [2024-05-15 11:17:54.135969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.013 [2024-05-15 11:17:54.135990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.013 [2024-05-15 11:17:54.135995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.577 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:57.577 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:25:57.577 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:57.577 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:57.577 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.835 [2024-05-15 11:17:54.853410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.835 Malloc0 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.835 [2024-05-15 11:17:54.911543] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:57.835 [2024-05-15 11:17:54.911755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@532 -- # local subsystem config 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.835 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.835 { 00:25:57.835 "params": { 00:25:57.835 "name": "Nvme$subsystem", 00:25:57.835 "trtype": "$TEST_TRANSPORT", 00:25:57.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.835 "adrfam": "ipv4", 00:25:57.835 "trsvcid": "$NVMF_PORT", 00:25:57.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.835 "hdgst": ${hdgst:-false}, 00:25:57.835 "ddgst": ${ddgst:-false} 00:25:57.836 }, 00:25:57.836 "method": "bdev_nvme_attach_controller" 00:25:57.836 } 00:25:57.836 EOF 00:25:57.836 )") 00:25:57.836 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:57.836 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:57.836 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:57.836 11:17:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:57.836 "params": { 00:25:57.836 "name": "Nvme1", 00:25:57.836 "trtype": "tcp", 00:25:57.836 "traddr": "10.0.0.2", 00:25:57.836 "adrfam": "ipv4", 00:25:57.836 "trsvcid": "4420", 00:25:57.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.836 "hdgst": false, 00:25:57.836 "ddgst": false 00:25:57.836 }, 00:25:57.836 "method": "bdev_nvme_attach_controller" 00:25:57.836 }' 00:25:57.836 [2024-05-15 11:17:54.957538] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
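The tgt_init phase logged at host/bdevperf.sh@17 through @21 boils down to five JSON-RPC calls. In the real script, rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; the sketch below stubs it out so the sequence can be inspected without a running target (the stub is an assumption, the calls themselves are copied from the log):

```shell
#!/usr/bin/env bash
# Sketch of the tgt_init RPC sequence from host/bdevperf.sh (@17-@21 above).
set -euo pipefail

calls=""
rpc_cmd() { calls+="$*"$'\n'; }   # stub; the real helper invokes scripts/rpc.py

rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
printf '%s' "$calls"
```

The listener address 10.0.0.2:4420 is the namespaced target interface set up earlier, which is why the log immediately reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420".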
00:25:57.836 [2024-05-15 11:17:54.957582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401652 ] 00:25:57.836 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.836 [2024-05-15 11:17:55.010855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.836 [2024-05-15 11:17:55.083602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.455 Running I/O for 1 seconds... 00:25:59.387 00:25:59.387 Latency(us) 00:25:59.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.387 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.387 Verification LBA range: start 0x0 length 0x4000 00:25:59.387 Nvme1n1 : 1.00 10769.23 42.07 0.00 0.00 11828.26 1381.95 14360.93 00:25:59.387 =================================================================================================================== 00:25:59.387 Total : 10769.23 42.07 0.00 0.00 11828.26 1381.95 14360.93 00:25:59.387 11:17:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2401886 00:25:59.387 11:17:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:59.388 { 
00:25:59.388 "params": { 00:25:59.388 "name": "Nvme$subsystem", 00:25:59.388 "trtype": "$TEST_TRANSPORT", 00:25:59.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.388 "adrfam": "ipv4", 00:25:59.388 "trsvcid": "$NVMF_PORT", 00:25:59.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.388 "hdgst": ${hdgst:-false}, 00:25:59.388 "ddgst": ${ddgst:-false} 00:25:59.388 }, 00:25:59.388 "method": "bdev_nvme_attach_controller" 00:25:59.388 } 00:25:59.388 EOF 00:25:59.388 )") 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:59.388 11:17:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:59.388 "params": { 00:25:59.388 "name": "Nvme1", 00:25:59.388 "trtype": "tcp", 00:25:59.388 "traddr": "10.0.0.2", 00:25:59.388 "adrfam": "ipv4", 00:25:59.388 "trsvcid": "4420", 00:25:59.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.388 "hdgst": false, 00:25:59.388 "ddgst": false 00:25:59.388 }, 00:25:59.388 "method": "bdev_nvme_attach_controller" 00:25:59.388 }' 00:25:59.645 [2024-05-15 11:17:56.662366] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
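Both bdevperf runs above feed their controller config through gen_nvmf_target_json (nvmf/common.sh@532-@558): each subsystem contributes one JSON object via a here-doc, the pieces are comma-joined with IFS, and bdevperf reads the result over --json /dev/fd/62 or /dev/fd/63. A minimal sketch of that pattern, with the jq pretty-printing step dropped so it stands alone:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern logged above (jq step omitted).
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-1}"; do          # default to subsystem 1, as in the log
  config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

json=$(IFS=,; printf '%s' "${config[*]}")   # comma-join, mirroring IFS=, above
printf '%s\n' "$json"
```

With one subsystem this expands to exactly the Nvme1 object printed in the log, telling bdevperf to attach one NVMe-oF controller at 10.0.0.2:4420 with host/subsystem digests disabled.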
00:25:59.645 [2024-05-15 11:17:56.662416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401886 ] 00:25:59.645 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.645 [2024-05-15 11:17:56.716515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.645 [2024-05-15 11:17:56.786318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.903 Running I/O for 15 seconds... 00:26:02.430 11:17:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2401406 00:26:02.430 11:17:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:02.430 [2024-05-15 11:17:59.635581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.430 [2024-05-15 11:17:59.635620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
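The failover phase driven by host/bdevperf.sh (@29 through @35 above) explains the abort storm that follows: bdevperf runs a 15 s verify workload with -f (keep running on failure) while the first target, pid 2401406, is hard-killed, so every in-flight command completes as "ABORTED - SQ DELETION". A hypothetical condensation of that sequence, recording the steps rather than launching the binaries:

```shell
#!/usr/bin/env bash
# Sketch of the failover sequence (assumed reconstruction from the log).
set -euo pipefail

nvmfpid=2401406            # first nvmf_tgt, from nvmf/common.sh@481 above
plan=""
step() { plan+="$*"$'\n'; }

step "bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &"  # @29
step "sleep 3"             # @32: let the verify workload ramp up
step "kill -9 $nvmfpid"    # @33: hard-kill the target mid-I/O
step "sleep 3"             # @35: in-flight commands abort with SQ DELETION
printf '%s' "$plan"
```

Because the target dies without a graceful disconnect, the host-side qpair drains every queued READ/WRITE with an ABORTED status (the repeated nvme_qpair notices below), and the -f flag keeps bdevperf alive so it can reconnect once a new target is started.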
00:26:02.430 [2024-05-15 11:17:59.635681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.430 [2024-05-15 11:17:59.635932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.430 [2024-05-15 11:17:59.635942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.635950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.635960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.431 [2024-05-15 11:17:59.635969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.635979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.635987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.635998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 
[2024-05-15 11:17:59.636345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.431 [2024-05-15 11:17:59.636446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.431 [2024-05-15 11:17:59.636593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.431 [2024-05-15 11:17:59.636675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.431 [2024-05-15 11:17:59.636682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.432 [2024-05-15 11:17:59.636826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.432 [2024-05-15 11:17:59.636841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.636986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.636994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.432 [2024-05-15 11:17:59.637087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.432 [2024-05-15 11:17:59.637277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.432 [2024-05-15 11:17:59.637283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.433 [2024-05-15 11:17:59.637342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.433 [2024-05-15 11:17:59.637586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.433 [2024-05-15 11:17:59.637649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15236f0 is same with the state(5) to be set 00:26:02.433 [2024-05-15 11:17:59.637665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:02.433 [2024-05-15 11:17:59.637670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:26:02.433 [2024-05-15 11:17:59.637678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100184 len:8 PRP1 0x0 PRP2 0x0 00:26:02.433 [2024-05-15 11:17:59.637684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637728] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15236f0 was disconnected and freed. reset controller. 00:26:02.433 [2024-05-15 11:17:59.637774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.433 [2024-05-15 11:17:59.637783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.433 [2024-05-15 11:17:59.637798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.433 [2024-05-15 11:17:59.637811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.433 [2024-05-15 11:17:59.637824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.433 [2024-05-15 11:17:59.637830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.433 [2024-05-15 11:17:59.640695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.433 [2024-05-15 11:17:59.640719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.433 [2024-05-15 11:17:59.641359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.433 [2024-05-15 11:17:59.641560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.433 [2024-05-15 11:17:59.641570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.433 [2024-05-15 11:17:59.641578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.433 [2024-05-15 11:17:59.641758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.433 [2024-05-15 11:17:59.641938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.433 [2024-05-15 11:17:59.641945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.433 [2024-05-15 11:17:59.641953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.433 [2024-05-15 11:17:59.644822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.433 [2024-05-15 11:17:59.654019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.433 [2024-05-15 11:17:59.654494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.433 [2024-05-15 11:17:59.654653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.433 [2024-05-15 11:17:59.654663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.433 [2024-05-15 11:17:59.654671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.433 [2024-05-15 11:17:59.654845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.433 [2024-05-15 11:17:59.655020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.433 [2024-05-15 11:17:59.655028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.433 [2024-05-15 11:17:59.655034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.433 [2024-05-15 11:17:59.657752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.434 [2024-05-15 11:17:59.666925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.434 [2024-05-15 11:17:59.667375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.434 [2024-05-15 11:17:59.667632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.434 [2024-05-15 11:17:59.667663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.434 [2024-05-15 11:17:59.667684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.434 [2024-05-15 11:17:59.668285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.434 [2024-05-15 11:17:59.668807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.434 [2024-05-15 11:17:59.668815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.434 [2024-05-15 11:17:59.668821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.434 [2024-05-15 11:17:59.671529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.434 [2024-05-15 11:17:59.679821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.434 [2024-05-15 11:17:59.680093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.434 [2024-05-15 11:17:59.680267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.434 [2024-05-15 11:17:59.680281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.434 [2024-05-15 11:17:59.680288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.434 [2024-05-15 11:17:59.680463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.434 [2024-05-15 11:17:59.680638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.434 [2024-05-15 11:17:59.680646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.434 [2024-05-15 11:17:59.680652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.434 [2024-05-15 11:17:59.683459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.692972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.693435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.693692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.693722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.693744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.694342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.694693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.694701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.694707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.697471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.706092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.706511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.706676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.706686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.706693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.706866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.707040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.707049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.707054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.709823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.719087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.719510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.719789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.719820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.719849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.720141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.720319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.720328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.720334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.723145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.731939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.732299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.732514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.732544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.732565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.733088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.733266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.733274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.733280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.736063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.744832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.745191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.746413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.746434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.746442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.746652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.746828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.746836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.746842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.749603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.757983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.758377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.758541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.758552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.758559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.758747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.758931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.758940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.758946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.761813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.770973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.771399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.771562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.771572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.771579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.771754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.771929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.771937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.771943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.774698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.783949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.784282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.784387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.784397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.784404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.784578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.784754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.784761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.784767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.787578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.796994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.797355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.797519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.797550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.797571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.798082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.798265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.798274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.798279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.801030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.809874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.810234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.810397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.810407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.810414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.810587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.810762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.810770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.810776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.813521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.822921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.823309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.823528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.823558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.823580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.824116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.824309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.824317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.824323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.827068] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.835867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.836243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.836361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.836372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.694 [2024-05-15 11:17:59.836378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.694 [2024-05-15 11:17:59.836552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.694 [2024-05-15 11:17:59.836726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.694 [2024-05-15 11:17:59.836737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.694 [2024-05-15 11:17:59.836743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.694 [2024-05-15 11:17:59.839476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.694 [2024-05-15 11:17:59.848789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.694 [2024-05-15 11:17:59.849185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.849390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.694 [2024-05-15 11:17:59.849421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.849442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.850025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.850360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.850369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.850375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.853116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.861771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.862212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.862423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.862453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.862475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.863060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.863264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.863273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.863279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.865991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.874814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.875111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.875332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.875343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.875350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.875514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.875680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.875687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.875696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.878470] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.887789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.888109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.888333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.888368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.888389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.888974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.889388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.889396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.889402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.892268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.900883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.901244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.901356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.901367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.901374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.901553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.901733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.901741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.901747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.904616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.914031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.914328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.914488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.914498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.914505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.914678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.914853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.914861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.914867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.917653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.927148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.927477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.927644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.927654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.927661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.927834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.928009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.928017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.928022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.930832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.940219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.940526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.940719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.940729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.940736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.940910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.941084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.941093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.941099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.695 [2024-05-15 11:17:59.943866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.695 [2024-05-15 11:17:59.953188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.695 [2024-05-15 11:17:59.953508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.953784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.695 [2024-05-15 11:17:59.953815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.695 [2024-05-15 11:17:59.953836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.695 [2024-05-15 11:17:59.954304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.695 [2024-05-15 11:17:59.954509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.695 [2024-05-15 11:17:59.954517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.695 [2024-05-15 11:17:59.954523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.955 [2024-05-15 11:17:59.957414] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.955 [2024-05-15 11:17:59.966187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.955 [2024-05-15 11:17:59.966550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.955 [2024-05-15 11:17:59.966698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.955 [2024-05-15 11:17:59.966709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.955 [2024-05-15 11:17:59.966716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.955 [2024-05-15 11:17:59.966895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.955 [2024-05-15 11:17:59.967075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.955 [2024-05-15 11:17:59.967083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.955 [2024-05-15 11:17:59.967090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.955 [2024-05-15 11:17:59.969822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.955 [2024-05-15 11:17:59.979232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.955 [2024-05-15 11:17:59.979605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.955 [2024-05-15 11:17:59.979758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.955 [2024-05-15 11:17:59.979789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.955 [2024-05-15 11:17:59.979811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.955 [2024-05-15 11:17:59.980254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.955 [2024-05-15 11:17:59.980429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.955 [2024-05-15 11:17:59.980438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.955 [2024-05-15 11:17:59.980444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.955 [2024-05-15 11:17:59.983225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.955 [2024-05-15 11:17:59.992219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:02.955 [2024-05-15 11:17:59.992525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.955 [2024-05-15 11:17:59.992741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.955 [2024-05-15 11:17:59.992751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:02.955 [2024-05-15 11:17:59.992757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:02.955 [2024-05-15 11:17:59.992936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:02.955 [2024-05-15 11:17:59.993115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:02.955 [2024-05-15 11:17:59.993123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:02.955 [2024-05-15 11:17:59.993129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:02.955 [2024-05-15 11:17:59.995955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:02.955 [2024-05-15 11:18:00.005358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.955 [2024-05-15 11:18:00.005676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.005888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.005898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.955 [2024-05-15 11:18:00.005905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.955 [2024-05-15 11:18:00.006084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.955 [2024-05-15 11:18:00.006269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.955 [2024-05-15 11:18:00.006278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.955 [2024-05-15 11:18:00.006284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.955 [2024-05-15 11:18:00.009146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.955 [2024-05-15 11:18:00.019071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.955 [2024-05-15 11:18:00.019434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.019549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.019560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.955 [2024-05-15 11:18:00.019567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.955 [2024-05-15 11:18:00.019747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.955 [2024-05-15 11:18:00.019927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.955 [2024-05-15 11:18:00.019936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.955 [2024-05-15 11:18:00.019943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.955 [2024-05-15 11:18:00.022811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.955 [2024-05-15 11:18:00.032281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.955 [2024-05-15 11:18:00.032641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.032898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.032910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.955 [2024-05-15 11:18:00.032917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.955 [2024-05-15 11:18:00.033096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.955 [2024-05-15 11:18:00.033281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.955 [2024-05-15 11:18:00.033291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.955 [2024-05-15 11:18:00.033297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.955 [2024-05-15 11:18:00.036171] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.955 [2024-05-15 11:18:00.045711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.955 [2024-05-15 11:18:00.046068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.046239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.046256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.955 [2024-05-15 11:18:00.046264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.955 [2024-05-15 11:18:00.046446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.955 [2024-05-15 11:18:00.046627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.955 [2024-05-15 11:18:00.046635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.955 [2024-05-15 11:18:00.046642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.955 [2024-05-15 11:18:00.049510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.955 [2024-05-15 11:18:00.058807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.955 [2024-05-15 11:18:00.059112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.059300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.955 [2024-05-15 11:18:00.059312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.955 [2024-05-15 11:18:00.059320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.059498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.059679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.059688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.059694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.062561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.072024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.072320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.072557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.072568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.072575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.072755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.072935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.072944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.072950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.075817] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.085162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.085569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.085778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.085789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.085800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.085979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.086159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.086173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.086180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.089041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.098329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.098789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.098943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.098953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.098960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.099139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.099325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.099333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.099340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.102205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.111491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.111833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.112003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.112034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.112056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.112573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.112753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.112761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.112767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.115644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.124624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.125085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.125372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.125405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.125428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.125808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.126070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.126081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.126090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.130204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.138128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.138510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.138744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.138754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.138762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.138940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.139122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.139130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.139136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.142002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.151279] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.151632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.151808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.151819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.151826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.152004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.152189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.152198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.152204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.155063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.164489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.164876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.165090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.165101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.956 [2024-05-15 11:18:00.165108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.956 [2024-05-15 11:18:00.165293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.956 [2024-05-15 11:18:00.165477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.956 [2024-05-15 11:18:00.165485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.956 [2024-05-15 11:18:00.165492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.956 [2024-05-15 11:18:00.168363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.956 [2024-05-15 11:18:00.177504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.956 [2024-05-15 11:18:00.177933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.956 [2024-05-15 11:18:00.178159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.178174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.957 [2024-05-15 11:18:00.178182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.957 [2024-05-15 11:18:00.178361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.957 [2024-05-15 11:18:00.178540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.957 [2024-05-15 11:18:00.178548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.957 [2024-05-15 11:18:00.178554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.957 [2024-05-15 11:18:00.181384] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.957 [2024-05-15 11:18:00.190588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.957 [2024-05-15 11:18:00.191031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.191264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.191296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.957 [2024-05-15 11:18:00.191318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.957 [2024-05-15 11:18:00.191903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.957 [2024-05-15 11:18:00.192182] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.957 [2024-05-15 11:18:00.192190] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.957 [2024-05-15 11:18:00.192197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.957 [2024-05-15 11:18:00.195012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.957 [2024-05-15 11:18:00.203744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.957 [2024-05-15 11:18:00.204098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.204252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.204263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.957 [2024-05-15 11:18:00.204271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:02.957 [2024-05-15 11:18:00.204449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:02.957 [2024-05-15 11:18:00.204629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.957 [2024-05-15 11:18:00.204640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.957 [2024-05-15 11:18:00.204646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.957 [2024-05-15 11:18:00.207497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.957 [2024-05-15 11:18:00.216934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.957 [2024-05-15 11:18:00.217290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.217460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.957 [2024-05-15 11:18:00.217491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:02.957 [2024-05-15 11:18:00.217512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.216 [2024-05-15 11:18:00.218054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.216 [2024-05-15 11:18:00.218237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.216 [2024-05-15 11:18:00.218246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.216 [2024-05-15 11:18:00.218252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.216 [2024-05-15 11:18:00.221181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.216 [2024-05-15 11:18:00.230112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.216 [2024-05-15 11:18:00.230436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.230589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.230600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.216 [2024-05-15 11:18:00.230608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.216 [2024-05-15 11:18:00.230787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.216 [2024-05-15 11:18:00.230966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.216 [2024-05-15 11:18:00.230974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.216 [2024-05-15 11:18:00.230981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.216 [2024-05-15 11:18:00.233839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.216 [2024-05-15 11:18:00.243231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.216 [2024-05-15 11:18:00.243683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.243885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.243916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.216 [2024-05-15 11:18:00.243938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.216 [2024-05-15 11:18:00.244251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.216 [2024-05-15 11:18:00.244431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.216 [2024-05-15 11:18:00.244440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.216 [2024-05-15 11:18:00.244449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.216 [2024-05-15 11:18:00.247294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.216 [2024-05-15 11:18:00.256387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.216 [2024-05-15 11:18:00.256857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.257092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.257102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.216 [2024-05-15 11:18:00.257110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.216 [2024-05-15 11:18:00.257295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.216 [2024-05-15 11:18:00.257476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.216 [2024-05-15 11:18:00.257484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.216 [2024-05-15 11:18:00.257491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.216 [2024-05-15 11:18:00.260339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.216 [2024-05-15 11:18:00.269530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.216 [2024-05-15 11:18:00.269916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.270080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-05-15 11:18:00.270090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.216 [2024-05-15 11:18:00.270097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.216 [2024-05-15 11:18:00.270281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.216 [2024-05-15 11:18:00.270461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.216 [2024-05-15 11:18:00.270469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.216 [2024-05-15 11:18:00.270475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.216 [2024-05-15 11:18:00.273337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.216 [2024-05-15 11:18:00.282703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.216 [2024-05-15 11:18:00.283057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.283308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.283340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.216 [2024-05-15 11:18:00.283362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.216 [2024-05-15 11:18:00.283948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.216 [2024-05-15 11:18:00.284365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.216 [2024-05-15 11:18:00.284374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.216 [2024-05-15 11:18:00.284380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.216 [2024-05-15 11:18:00.287220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.216 [2024-05-15 11:18:00.295917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.216 [2024-05-15 11:18:00.296345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.296555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.296566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.216 [2024-05-15 11:18:00.296573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.216 [2024-05-15 11:18:00.296752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.216 [2024-05-15 11:18:00.296931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.216 [2024-05-15 11:18:00.296939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.216 [2024-05-15 11:18:00.296945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.216 [2024-05-15 11:18:00.299811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.216 [2024-05-15 11:18:00.308998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.216 [2024-05-15 11:18:00.309382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.309556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.309586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.216 [2024-05-15 11:18:00.309608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.216 [2024-05-15 11:18:00.310069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.216 [2024-05-15 11:18:00.310253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.216 [2024-05-15 11:18:00.310261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.216 [2024-05-15 11:18:00.310268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.216 [2024-05-15 11:18:00.313116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.216 [2024-05-15 11:18:00.322241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.216 [2024-05-15 11:18:00.322616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.322857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.216 [2024-05-15 11:18:00.322888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.216 [2024-05-15 11:18:00.322911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.323355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.323535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.323543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.323549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.326378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.335395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.335847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.336139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.336184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.336207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.336427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.336606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.336614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.336620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.339478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.348509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.348954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.349183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.349215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.349237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.349823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.350402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.350410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.350417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.353286] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.361665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.362128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.362368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.362401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.362424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.363008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.363561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.363570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.363576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.366402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.374762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.375110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.375226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.375238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.375246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.375421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.375595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.375603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.375610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.378315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.387771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.388112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.388343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.388354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.388361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.388579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.388758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.388766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.388772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.391633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.400910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.401265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.401481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.401491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.401498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.401676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.401855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.401863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.401869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.404737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.414022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.414383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.414615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.414653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.414675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.415270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.415741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.415749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.415755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.418615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.427143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.427432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.427638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.427669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.427691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.428289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.428779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.428787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.428793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.432640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.440840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.217 [2024-05-15 11:18:00.441221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.441360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.217 [2024-05-15 11:18:00.441370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.217 [2024-05-15 11:18:00.441377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.217 [2024-05-15 11:18:00.441556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.217 [2024-05-15 11:18:00.441735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.217 [2024-05-15 11:18:00.441743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.217 [2024-05-15 11:18:00.441749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.217 [2024-05-15 11:18:00.444618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.217 [2024-05-15 11:18:00.453957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.218 [2024-05-15 11:18:00.454322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.218 [2024-05-15 11:18:00.454534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.218 [2024-05-15 11:18:00.454545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.218 [2024-05-15 11:18:00.454555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.218 [2024-05-15 11:18:00.454734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.218 [2024-05-15 11:18:00.454913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.218 [2024-05-15 11:18:00.454921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.218 [2024-05-15 11:18:00.454927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.218 [2024-05-15 11:18:00.457799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.218 [2024-05-15 11:18:00.467156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.218 [2024-05-15 11:18:00.467540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.218 [2024-05-15 11:18:00.467757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.218 [2024-05-15 11:18:00.467787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.218 [2024-05-15 11:18:00.467809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.218 [2024-05-15 11:18:00.468398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.218 [2024-05-15 11:18:00.468578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.218 [2024-05-15 11:18:00.468586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.218 [2024-05-15 11:18:00.468592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.218 [2024-05-15 11:18:00.471435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.480290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.480750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.480999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.481031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.481054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.481568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.481749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.481756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.481763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.484678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.493444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.493868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.494105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.494136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.494159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.494544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.494724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.494732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.494738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.497603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.506532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.506972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.507191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.507203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.507210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.507405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.507585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.507593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.507599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.510476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.519670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.520097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.520321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.520334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.520342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.520517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.520691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.520699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.520705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.523568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.532759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.533143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.533379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.533390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.533398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.533578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.533761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.533769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.533776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.536600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.545770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.546168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.546369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.546379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.546386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.546571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.546746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.546753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.546759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.549571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.558958] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.559413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.559624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.559654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.559675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.476 [2024-05-15 11:18:00.560111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.476 [2024-05-15 11:18:00.560372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.476 [2024-05-15 11:18:00.560383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.476 [2024-05-15 11:18:00.560392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.476 [2024-05-15 11:18:00.564499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.476 [2024-05-15 11:18:00.572414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.476 [2024-05-15 11:18:00.572858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.573147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.476 [2024-05-15 11:18:00.573191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.476 [2024-05-15 11:18:00.573213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.477 [2024-05-15 11:18:00.573775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.477 [2024-05-15 11:18:00.573955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.477 [2024-05-15 11:18:00.573966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.477 [2024-05-15 11:18:00.573973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.477 [2024-05-15 11:18:00.576809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.477 [2024-05-15 11:18:00.585557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.586038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.586311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.586342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.586365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.586736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.586916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.586924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.586931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.589758] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.598601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.599059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.599260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.599293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.599315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.599900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.600217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.600229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.600239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.604349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.612203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.612633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.612920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.612950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.612971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.613568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.614157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.614188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.614227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.617041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.625227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.625634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.625847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.625857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.625864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.626042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.626226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.626235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.626241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.629063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.638170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.638598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.638863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.638894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.638915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.639273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.639453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.639461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.639467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.642341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.651291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.651642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.651854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.651864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.651871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.652050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.652235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.652244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.652251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.655107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.664415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.664800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.665053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.665084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.665107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.665407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.665582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.665590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.665596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.668593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.677494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.677848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.678073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.678084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.678091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.678275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.678462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.477 [2024-05-15 11:18:00.678470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.477 [2024-05-15 11:18:00.678476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.477 [2024-05-15 11:18:00.681311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.477 [2024-05-15 11:18:00.690563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.477 [2024-05-15 11:18:00.690951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.691040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.477 [2024-05-15 11:18:00.691050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.477 [2024-05-15 11:18:00.691057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.477 [2024-05-15 11:18:00.691254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.477 [2024-05-15 11:18:00.691434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.478 [2024-05-15 11:18:00.691443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.478 [2024-05-15 11:18:00.691449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.478 [2024-05-15 11:18:00.694268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.478 [2024-05-15 11:18:00.703708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.478 [2024-05-15 11:18:00.704142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.478 [2024-05-15 11:18:00.704387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.478 [2024-05-15 11:18:00.704399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.478 [2024-05-15 11:18:00.704406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.478 [2024-05-15 11:18:00.704587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.478 [2024-05-15 11:18:00.704766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.478 [2024-05-15 11:18:00.704774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.478 [2024-05-15 11:18:00.704780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.478 [2024-05-15 11:18:00.707645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.478 [2024-05-15 11:18:00.716922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.478 [2024-05-15 11:18:00.717325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.478 [2024-05-15 11:18:00.717491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.478 [2024-05-15 11:18:00.717501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.478 [2024-05-15 11:18:00.717509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.478 [2024-05-15 11:18:00.717687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.478 [2024-05-15 11:18:00.717866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.478 [2024-05-15 11:18:00.717874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.478 [2024-05-15 11:18:00.717881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.478 [2024-05-15 11:18:00.720748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.478 [2024-05-15 11:18:00.730086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.478 [2024-05-15 11:18:00.730489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.478 [2024-05-15 11:18:00.730746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.478 [2024-05-15 11:18:00.730777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.478 [2024-05-15 11:18:00.730799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.478 [2024-05-15 11:18:00.731377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.478 [2024-05-15 11:18:00.731557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.478 [2024-05-15 11:18:00.731565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.478 [2024-05-15 11:18:00.731571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.478 [2024-05-15 11:18:00.734412] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.743305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.743692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.743921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.743935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.743943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.744124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.744310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.744319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.744326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.747216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.756432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.756755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.756988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.756998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.757007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.757191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.757371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.757380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.757387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.760257] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.769612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.770034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.770236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.770247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.770255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.770433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.770619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.770627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.770633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.773510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.782796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.783216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.783322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.783335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.783342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.783516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.783707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.783715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.783721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.786616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.795846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.796296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.796535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.796566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.796588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.797181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.797560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.797568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.797574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.800420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.808886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.809274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.809438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.809449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.809456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.809635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.809814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.809822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.809828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.812661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.822030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.822488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.822711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.822722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.738 [2024-05-15 11:18:00.822732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.738 [2024-05-15 11:18:00.822912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.738 [2024-05-15 11:18:00.823090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.738 [2024-05-15 11:18:00.823099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.738 [2024-05-15 11:18:00.823105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.738 [2024-05-15 11:18:00.825933] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.738 [2024-05-15 11:18:00.834959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.738 [2024-05-15 11:18:00.835417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.835611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.738 [2024-05-15 11:18:00.835642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.739 [2024-05-15 11:18:00.835663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.739 [2024-05-15 11:18:00.836258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.739 [2024-05-15 11:18:00.836653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.739 [2024-05-15 11:18:00.836661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.739 [2024-05-15 11:18:00.836667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.739 [2024-05-15 11:18:00.839291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.739 [2024-05-15 11:18:00.847815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.739 [2024-05-15 11:18:00.848244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.739 [2024-05-15 11:18:00.848516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.739 [2024-05-15 11:18:00.848547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.739 [2024-05-15 11:18:00.848569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.739 [2024-05-15 11:18:00.848828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.739 [2024-05-15 11:18:00.848993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.739 [2024-05-15 11:18:00.849000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.739 [2024-05-15 11:18:00.849005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.739 [2024-05-15 11:18:00.851724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.739 [2024-05-15 11:18:00.860732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.739 [2024-05-15 11:18:00.861156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.739 [2024-05-15 11:18:00.861375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.739 [2024-05-15 11:18:00.861385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.739 [2024-05-15 11:18:00.861392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.739 [2024-05-15 11:18:00.861569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.739 [2024-05-15 11:18:00.861743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.739 [2024-05-15 11:18:00.861750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.739 [2024-05-15 11:18:00.861756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.739 [2024-05-15 11:18:00.864540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.739 [2024-05-15 11:18:00.873707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.739 [2024-05-15 11:18:00.874145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.739 [2024-05-15 11:18:00.874417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.739 [2024-05-15 11:18:00.874448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:03.739 [2024-05-15 11:18:00.874469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:03.739 [2024-05-15 11:18:00.874770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:03.739 [2024-05-15 11:18:00.874945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.739 [2024-05-15 11:18:00.874953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.739 [2024-05-15 11:18:00.874959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.739 [2024-05-15 11:18:00.877731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.739 [2024-05-15 11:18:00.886685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.739 [2024-05-15 11:18:00.887131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.887356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.887388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.739 [2024-05-15 11:18:00.887411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.739 [2024-05-15 11:18:00.887778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.739 [2024-05-15 11:18:00.887952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.739 [2024-05-15 11:18:00.887960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.739 [2024-05-15 11:18:00.887967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.739 [2024-05-15 11:18:00.890687] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.739 [2024-05-15 11:18:00.899848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.739 [2024-05-15 11:18:00.900268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.900374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.900419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.739 [2024-05-15 11:18:00.900442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.739 [2024-05-15 11:18:00.901025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.739 [2024-05-15 11:18:00.901634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.739 [2024-05-15 11:18:00.901661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.739 [2024-05-15 11:18:00.901691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.739 [2024-05-15 11:18:00.904507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.739 [2024-05-15 11:18:00.912896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.739 [2024-05-15 11:18:00.913359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.913523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.913534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.739 [2024-05-15 11:18:00.913541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.739 [2024-05-15 11:18:00.913714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.739 [2024-05-15 11:18:00.913889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.739 [2024-05-15 11:18:00.913896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.739 [2024-05-15 11:18:00.913902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.739 [2024-05-15 11:18:00.916683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.739 [2024-05-15 11:18:00.925782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.739 [2024-05-15 11:18:00.926147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.926364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.926375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.739 [2024-05-15 11:18:00.926381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.739 [2024-05-15 11:18:00.926555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.739 [2024-05-15 11:18:00.926730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.739 [2024-05-15 11:18:00.926737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.739 [2024-05-15 11:18:00.926743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.739 [2024-05-15 11:18:00.929468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.739 [2024-05-15 11:18:00.938733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.739 [2024-05-15 11:18:00.939084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.939240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.939273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.739 [2024-05-15 11:18:00.939294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.739 [2024-05-15 11:18:00.939574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.739 [2024-05-15 11:18:00.939748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.739 [2024-05-15 11:18:00.939760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.739 [2024-05-15 11:18:00.939767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.739 [2024-05-15 11:18:00.942446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.739 [2024-05-15 11:18:00.951654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.739 [2024-05-15 11:18:00.952062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.952281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.739 [2024-05-15 11:18:00.952315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.739 [2024-05-15 11:18:00.952337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.739 [2024-05-15 11:18:00.952866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.739 [2024-05-15 11:18:00.953040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.739 [2024-05-15 11:18:00.953047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.739 [2024-05-15 11:18:00.953054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.739 [2024-05-15 11:18:00.955811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.740 [2024-05-15 11:18:00.964600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.740 [2024-05-15 11:18:00.965026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.740 [2024-05-15 11:18:00.965174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.740 [2024-05-15 11:18:00.965185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.740 [2024-05-15 11:18:00.965192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.740 [2024-05-15 11:18:00.965365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.740 [2024-05-15 11:18:00.965539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.740 [2024-05-15 11:18:00.965547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.740 [2024-05-15 11:18:00.965553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.740 [2024-05-15 11:18:00.968340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.740 [2024-05-15 11:18:00.977585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.740 [2024-05-15 11:18:00.977998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.740 [2024-05-15 11:18:00.978208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.740 [2024-05-15 11:18:00.978219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.740 [2024-05-15 11:18:00.978226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.740 [2024-05-15 11:18:00.978400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.740 [2024-05-15 11:18:00.978575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.740 [2024-05-15 11:18:00.978583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.740 [2024-05-15 11:18:00.978593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.740 [2024-05-15 11:18:00.981333] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:03.740 [2024-05-15 11:18:00.990464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:03.740 [2024-05-15 11:18:00.990869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.740 [2024-05-15 11:18:00.991099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.740 [2024-05-15 11:18:00.991108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:03.740 [2024-05-15 11:18:00.991114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:03.740 [2024-05-15 11:18:00.991305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:03.740 [2024-05-15 11:18:00.991479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:03.740 [2024-05-15 11:18:00.991487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:03.740 [2024-05-15 11:18:00.991493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:03.740 [2024-05-15 11:18:00.994237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.003511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.003970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.004191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.004225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.004248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.004664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.004922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.004934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.004943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.009088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.017011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.017427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.017636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.017646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.017653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.017828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.018003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.018010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.018016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.020868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.030055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.030369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.030554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.030564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.030571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.030745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.030920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.030928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.030934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.033751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.043112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.043510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.043794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.043826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.043848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.044214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.044391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.044399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.044405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.047105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.056022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.056472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.056636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.056647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.056654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.056827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.057001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.057009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.057016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.059724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.068851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.069285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.069484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.069516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.069537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.069891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.070056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.070063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.070069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.072834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.081746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.082106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.082313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.082347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.082369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.082953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.083324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.083332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.083338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.086094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.094758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.095101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.095338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.095350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.095359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.095532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.095707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.095715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.095721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.098397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.107867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.108225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.108333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.000 [2024-05-15 11:18:01.108343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.000 [2024-05-15 11:18:01.108351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.000 [2024-05-15 11:18:01.108530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.000 [2024-05-15 11:18:01.108709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.000 [2024-05-15 11:18:01.108718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.000 [2024-05-15 11:18:01.108726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.000 [2024-05-15 11:18:01.111599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.000 [2024-05-15 11:18:01.121232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.000 [2024-05-15 11:18:01.121585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.121849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.121859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.001 [2024-05-15 11:18:01.121867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.001 [2024-05-15 11:18:01.122046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.001 [2024-05-15 11:18:01.122233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.001 [2024-05-15 11:18:01.122242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.001 [2024-05-15 11:18:01.122249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.001 [2024-05-15 11:18:01.125110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.001 [2024-05-15 11:18:01.134401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.001 [2024-05-15 11:18:01.134841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.135001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.135012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.001 [2024-05-15 11:18:01.135020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.001 [2024-05-15 11:18:01.135206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.001 [2024-05-15 11:18:01.135386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.001 [2024-05-15 11:18:01.135395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.001 [2024-05-15 11:18:01.135402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.001 [2024-05-15 11:18:01.138269] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.001 [2024-05-15 11:18:01.147564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.001 [2024-05-15 11:18:01.148000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.148235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.148249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.001 [2024-05-15 11:18:01.148257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.001 [2024-05-15 11:18:01.148436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.001 [2024-05-15 11:18:01.148614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.001 [2024-05-15 11:18:01.148622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.001 [2024-05-15 11:18:01.148629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.001 [2024-05-15 11:18:01.151509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.001 [2024-05-15 11:18:01.160798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.001 [2024-05-15 11:18:01.161245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.161480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.161491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.001 [2024-05-15 11:18:01.161499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.001 [2024-05-15 11:18:01.161678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.001 [2024-05-15 11:18:01.161858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.001 [2024-05-15 11:18:01.161866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.001 [2024-05-15 11:18:01.161872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.001 [2024-05-15 11:18:01.164743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.001 [2024-05-15 11:18:01.173897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.001 [2024-05-15 11:18:01.174267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.174384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.001 [2024-05-15 11:18:01.174395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:04.001 [2024-05-15 11:18:01.174402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:04.001 [2024-05-15 11:18:01.174581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:04.001 [2024-05-15 11:18:01.174761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.001 [2024-05-15 11:18:01.174769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.001 [2024-05-15 11:18:01.174775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.001 [2024-05-15 11:18:01.177644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.001 [2024-05-15 11:18:01.187021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.001 [2024-05-15 11:18:01.187340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.187575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.187606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.001 [2024-05-15 11:18:01.187635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.001 [2024-05-15 11:18:01.188238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.001 [2024-05-15 11:18:01.188414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.001 [2024-05-15 11:18:01.188422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.001 [2024-05-15 11:18:01.188428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.001 [2024-05-15 11:18:01.191212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.001 [2024-05-15 11:18:01.200011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.001 [2024-05-15 11:18:01.200357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.200538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.200569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.001 [2024-05-15 11:18:01.200591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.001 [2024-05-15 11:18:01.201170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.001 [2024-05-15 11:18:01.201345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.001 [2024-05-15 11:18:01.201353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.001 [2024-05-15 11:18:01.201360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.001 [2024-05-15 11:18:01.204064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.001 [2024-05-15 11:18:01.213077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.001 [2024-05-15 11:18:01.213546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.213753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.213784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.001 [2024-05-15 11:18:01.213805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.001 [2024-05-15 11:18:01.214403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.001 [2024-05-15 11:18:01.214776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.001 [2024-05-15 11:18:01.214784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.001 [2024-05-15 11:18:01.214790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.001 [2024-05-15 11:18:01.217575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.001 [2024-05-15 11:18:01.226080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.001 [2024-05-15 11:18:01.226465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.226576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.226586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.001 [2024-05-15 11:18:01.226593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.001 [2024-05-15 11:18:01.226769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.001 [2024-05-15 11:18:01.226943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.001 [2024-05-15 11:18:01.226951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.001 [2024-05-15 11:18:01.226957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.001 [2024-05-15 11:18:01.229768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.001 [2024-05-15 11:18:01.239106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.001 [2024-05-15 11:18:01.239454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.239667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.001 [2024-05-15 11:18:01.239699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.001 [2024-05-15 11:18:01.239720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.001 [2024-05-15 11:18:01.240318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.001 [2024-05-15 11:18:01.240732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.001 [2024-05-15 11:18:01.240739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.001 [2024-05-15 11:18:01.240745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.002 [2024-05-15 11:18:01.243738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.002 [2024-05-15 11:18:01.252111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.002 [2024-05-15 11:18:01.252467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.002 [2024-05-15 11:18:01.252574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.002 [2024-05-15 11:18:01.252584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.002 [2024-05-15 11:18:01.252591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.002 [2024-05-15 11:18:01.252765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.002 [2024-05-15 11:18:01.252940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.002 [2024-05-15 11:18:01.252948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.002 [2024-05-15 11:18:01.252954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.002 [2024-05-15 11:18:01.255776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.259 [2024-05-15 11:18:01.265267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.259 [2024-05-15 11:18:01.265696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.259 [2024-05-15 11:18:01.265811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.259 [2024-05-15 11:18:01.265822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.259 [2024-05-15 11:18:01.265830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.259 [2024-05-15 11:18:01.266010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.259 [2024-05-15 11:18:01.266209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.259 [2024-05-15 11:18:01.266219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.259 [2024-05-15 11:18:01.266226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.259 [2024-05-15 11:18:01.269121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.259 [2024-05-15 11:18:01.278441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.259 [2024-05-15 11:18:01.278807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.259 [2024-05-15 11:18:01.278914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.259 [2024-05-15 11:18:01.278925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.259 [2024-05-15 11:18:01.278932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.279112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.279300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.279309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.279317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.282174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.291586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.291950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.292063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.292073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.292080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.292278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.292457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.292465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.292472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.295292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.304676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.305147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.305370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.305402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.305424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.305828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.306003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.306013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.306020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.308806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.317823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.318238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.318402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.318412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.318419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.318597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.318776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.318784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.318791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.321660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.330945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.331331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.331447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.331457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.331464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.331643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.331822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.331830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.331837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.334704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.344169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.344514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.344711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.344721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.344728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.344908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.345087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.345095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.345105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.347969] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.357282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.357646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.357806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.357816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.357822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.357995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.358173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.358181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.358187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.361037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.370432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.370730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.370982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.370992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.370999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.371183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.371363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.371371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.371377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.374210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.383509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.384014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.384315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.384350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.384372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.384697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.260 [2024-05-15 11:18:01.384872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.260 [2024-05-15 11:18:01.384879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.260 [2024-05-15 11:18:01.384886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.260 [2024-05-15 11:18:01.387676] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.260 [2024-05-15 11:18:01.396655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.260 [2024-05-15 11:18:01.397041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.397197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.260 [2024-05-15 11:18:01.397208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.260 [2024-05-15 11:18:01.397215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.260 [2024-05-15 11:18:01.397395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.397574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.397582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.397588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.400460] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.409765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.410128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.410289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.410300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.410307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.410486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.410666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.410674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.410680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.413550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.422854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.423269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.423396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.423428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.423451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.423986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.424173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.424182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.424189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.427052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.436025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.436324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.436485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.436496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.436503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.436682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.436862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.436870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.436877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.439751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.448975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.449356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.449517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.449527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.449534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.449707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.449881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.449888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.449895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.452646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.461916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.462338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.462494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.462504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.462511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.462685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.462859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.462867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.462873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.465693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.475078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.475469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.475581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.475591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.475598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.475777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.475956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.475964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.475971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.478803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.488186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.488554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.488716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.488726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.488733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.488912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.261 [2024-05-15 11:18:01.489091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.261 [2024-05-15 11:18:01.489100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.261 [2024-05-15 11:18:01.489107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.261 [2024-05-15 11:18:01.491976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.261 [2024-05-15 11:18:01.501377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.261 [2024-05-15 11:18:01.501849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.502025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.261 [2024-05-15 11:18:01.502036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.261 [2024-05-15 11:18:01.502042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.261 [2024-05-15 11:18:01.502225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.262 [2024-05-15 11:18:01.502406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.262 [2024-05-15 11:18:01.502413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.262 [2024-05-15 11:18:01.502419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.262 [2024-05-15 11:18:01.505271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.262 [2024-05-15 11:18:01.514509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.262 [2024-05-15 11:18:01.514875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.262 [2024-05-15 11:18:01.515113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.262 [2024-05-15 11:18:01.515124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.262 [2024-05-15 11:18:01.515130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.262 [2024-05-15 11:18:01.515315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.262 [2024-05-15 11:18:01.515495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.262 [2024-05-15 11:18:01.515503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.262 [2024-05-15 11:18:01.515509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.262 [2024-05-15 11:18:01.518368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.520 [2024-05-15 11:18:01.527708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.520 [2024-05-15 11:18:01.528173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.520 [2024-05-15 11:18:01.528304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.520 [2024-05-15 11:18:01.528322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.520 [2024-05-15 11:18:01.528333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.520 [2024-05-15 11:18:01.528526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.520 [2024-05-15 11:18:01.528709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.520 [2024-05-15 11:18:01.528717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.520 [2024-05-15 11:18:01.528723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.520 [2024-05-15 11:18:01.531557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.520 [2024-05-15 11:18:01.540958] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.520 [2024-05-15 11:18:01.541383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.520 [2024-05-15 11:18:01.541593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.520 [2024-05-15 11:18:01.541604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.520 [2024-05-15 11:18:01.541611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.520 [2024-05-15 11:18:01.541790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.520 [2024-05-15 11:18:01.541972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.520 [2024-05-15 11:18:01.541981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.520 [2024-05-15 11:18:01.541989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.520 [2024-05-15 11:18:01.544860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.520 [2024-05-15 11:18:01.554151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.520 [2024-05-15 11:18:01.554520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.520 [2024-05-15 11:18:01.554681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.520 [2024-05-15 11:18:01.554691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.520 [2024-05-15 11:18:01.554704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.520 [2024-05-15 11:18:01.554883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.555062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.555070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.555076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.557941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.567228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.567663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.567894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.567904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.567910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.568085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.568282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.568291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.568297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.571119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.580328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.580758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.580986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.580996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.581003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.581189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.581370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.581378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.581384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.584180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.593420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.593779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.594018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.594028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.594035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.594221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.594396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.594404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.594410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.597187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.606487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.606925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.607133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.607144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.607151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.607336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.607517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.607525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.607531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.610395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.619670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.620094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.620253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.620264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.620271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.620450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.620629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.620637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.620643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.623510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.632785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.633210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.633372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.633383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.633390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.633568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.633749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.633757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.633764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.636630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.645866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.646231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.646375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.646386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.646393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.646578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.646752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.646760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.646766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.649572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.658916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.659357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.659569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.659579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.659586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.659765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.659944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.521 [2024-05-15 11:18:01.659952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.521 [2024-05-15 11:18:01.659958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.521 [2024-05-15 11:18:01.662822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.521 [2024-05-15 11:18:01.672118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.521 [2024-05-15 11:18:01.672492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.672722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.521 [2024-05-15 11:18:01.672732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.521 [2024-05-15 11:18:01.672740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.521 [2024-05-15 11:18:01.672918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.521 [2024-05-15 11:18:01.673097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.673108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.673114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.675978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.685211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.685655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.685929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.685960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.685982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.686583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.686837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.686845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.686851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.689715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.698269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.698692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.698945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.698976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.698998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.699600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.699898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.699906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.699913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.702883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.711360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.711712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.711867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.711898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.711921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.712444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.712644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.712655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.712669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.716775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.724830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.725243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.725388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.725398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.725406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.725580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.725755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.725763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.725769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.728557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.737844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.738269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.738433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.738443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.738449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.738613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.738778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.738785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.738791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.741586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.750740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.751058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.751285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.751296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.751303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.751477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.751650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.751658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.751664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.754356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.763721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.764090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.764322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.764335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.764342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.764523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.764687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.764695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.764701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.767366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.522 [2024-05-15 11:18:01.776604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.522 [2024-05-15 11:18:01.776909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.777036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.522 [2024-05-15 11:18:01.777047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.522 [2024-05-15 11:18:01.777054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.522 [2024-05-15 11:18:01.777651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.522 [2024-05-15 11:18:01.777826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.522 [2024-05-15 11:18:01.777834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.522 [2024-05-15 11:18:01.777840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.522 [2024-05-15 11:18:01.780712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.781 [2024-05-15 11:18:01.789702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.781 [2024-05-15 11:18:01.790060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.790171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.790182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.781 [2024-05-15 11:18:01.790190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.781 [2024-05-15 11:18:01.790369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.781 [2024-05-15 11:18:01.790549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.781 [2024-05-15 11:18:01.790558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.781 [2024-05-15 11:18:01.790564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.781 [2024-05-15 11:18:01.793431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.781 [2024-05-15 11:18:01.802883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.781 [2024-05-15 11:18:01.803228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.803386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.803397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.781 [2024-05-15 11:18:01.803405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.781 [2024-05-15 11:18:01.803583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.781 [2024-05-15 11:18:01.803762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.781 [2024-05-15 11:18:01.803770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.781 [2024-05-15 11:18:01.803776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.781 [2024-05-15 11:18:01.806635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.781 [2024-05-15 11:18:01.815952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.781 [2024-05-15 11:18:01.816377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.816590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.816601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.781 [2024-05-15 11:18:01.816607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.781 [2024-05-15 11:18:01.816772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.781 [2024-05-15 11:18:01.816936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.781 [2024-05-15 11:18:01.816944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.781 [2024-05-15 11:18:01.816950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.781 [2024-05-15 11:18:01.819752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.781 [2024-05-15 11:18:01.829012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.781 [2024-05-15 11:18:01.829458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.829739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 11:18:01.829771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.781 [2024-05-15 11:18:01.829792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.781 [2024-05-15 11:18:01.830094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.781 [2024-05-15 11:18:01.830279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.781 [2024-05-15 11:18:01.830287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.781 [2024-05-15 11:18:01.830294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.781 [2024-05-15 11:18:01.833133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.841935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.842301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.842483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.842493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.842500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.842674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.842847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.842855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.842861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.845633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.854807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.855222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.855450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.855460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.855466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.855631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.855796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.855803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.855809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.858531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.867693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.868054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.868296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.868330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.868353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.868844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.869019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.869026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.869032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.871741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.880556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.881011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.881248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.881282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.881304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.881890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.882130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.882137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.882143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.884849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.893427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.893859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.894000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.894010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.894017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.894197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.894371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.894379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.894385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.897087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.906302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.906747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.906985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.907016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.907037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.907642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.908211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.908219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.908225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.910924] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.919378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.919816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.920025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.920035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.920045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.920230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.920409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.920416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.920423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.923304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.932501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.932926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.933131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.933161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.933198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.933499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.933679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.933687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.933693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.936451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.945399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.945837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.946096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.946126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.946148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.946416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.782 [2024-05-15 11:18:01.946596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.782 [2024-05-15 11:18:01.946604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.782 [2024-05-15 11:18:01.946610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.782 [2024-05-15 11:18:01.949349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.782 [2024-05-15 11:18:01.958296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.782 [2024-05-15 11:18:01.958646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.958776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 11:18:01.958785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.782 [2024-05-15 11:18:01.958791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.782 [2024-05-15 11:18:01.958958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:01.959122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:01.959129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:01.959135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:01.961861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.783 [2024-05-15 11:18:01.971253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.783 [2024-05-15 11:18:01.971704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:01.971979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:01.972009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.783 [2024-05-15 11:18:01.972031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.783 [2024-05-15 11:18:01.972381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:01.972555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:01.972563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:01.972569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:01.975272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.783 [2024-05-15 11:18:01.984080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.783 [2024-05-15 11:18:01.984520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:01.984745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:01.984775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.783 [2024-05-15 11:18:01.984797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.783 [2024-05-15 11:18:01.985229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:01.985403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:01.985411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:01.985417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:01.989358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.783 [2024-05-15 11:18:01.997491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.783 [2024-05-15 11:18:01.997943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:01.998181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:01.998213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.783 [2024-05-15 11:18:01.998235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.783 [2024-05-15 11:18:01.998531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:01.998708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:01.998715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:01.998722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:02.001467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.783 [2024-05-15 11:18:02.010428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.783 [2024-05-15 11:18:02.010803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:02.011032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:02.011042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.783 [2024-05-15 11:18:02.011048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.783 [2024-05-15 11:18:02.011229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:02.011402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:02.011410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:02.011416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:02.014118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.783 [2024-05-15 11:18:02.023329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.783 [2024-05-15 11:18:02.023675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:02.023884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:02.023893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.783 [2024-05-15 11:18:02.023900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.783 [2024-05-15 11:18:02.024073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:02.024253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:02.024261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:02.024267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:02.027010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.783 [2024-05-15 11:18:02.036282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.783 [2024-05-15 11:18:02.036703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:02.036941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 11:18:02.036972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:04.783 [2024-05-15 11:18:02.036995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:04.783 [2024-05-15 11:18:02.037527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:04.783 [2024-05-15 11:18:02.037701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.783 [2024-05-15 11:18:02.037711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.783 [2024-05-15 11:18:02.037717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.783 [2024-05-15 11:18:02.040425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.043 [2024-05-15 11:18:02.049205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.043 [2024-05-15 11:18:02.049626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.049843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.049854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.043 [2024-05-15 11:18:02.049862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.043 [2024-05-15 11:18:02.050041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.043 [2024-05-15 11:18:02.050239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.043 [2024-05-15 11:18:02.050248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.043 [2024-05-15 11:18:02.050255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.043 [2024-05-15 11:18:02.053063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.043 [2024-05-15 11:18:02.062338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.043 [2024-05-15 11:18:02.062782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.063040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.063072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.043 [2024-05-15 11:18:02.063095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.043 [2024-05-15 11:18:02.063492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.043 [2024-05-15 11:18:02.063672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.043 [2024-05-15 11:18:02.063680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.043 [2024-05-15 11:18:02.063687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.043 [2024-05-15 11:18:02.066422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.043 [2024-05-15 11:18:02.075352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.043 [2024-05-15 11:18:02.075820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.076051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.076082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.043 [2024-05-15 11:18:02.076105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.043 [2024-05-15 11:18:02.076687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.043 [2024-05-15 11:18:02.076861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.043 [2024-05-15 11:18:02.076869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.043 [2024-05-15 11:18:02.076878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.043 [2024-05-15 11:18:02.079581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.043 [2024-05-15 11:18:02.088213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.043 [2024-05-15 11:18:02.088643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.088851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.088861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.043 [2024-05-15 11:18:02.088868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.043 [2024-05-15 11:18:02.089042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.043 [2024-05-15 11:18:02.089222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.043 [2024-05-15 11:18:02.089230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.043 [2024-05-15 11:18:02.089236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.043 [2024-05-15 11:18:02.092021] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.043 [2024-05-15 11:18:02.101081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.043 [2024-05-15 11:18:02.101547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.101735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.101766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.043 [2024-05-15 11:18:02.101788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.043 [2024-05-15 11:18:02.102386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.043 [2024-05-15 11:18:02.102736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.043 [2024-05-15 11:18:02.102744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.043 [2024-05-15 11:18:02.102750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.043 [2024-05-15 11:18:02.105453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.043 [2024-05-15 11:18:02.114002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.043 [2024-05-15 11:18:02.114459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.114699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.043 [2024-05-15 11:18:02.114737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.043 [2024-05-15 11:18:02.114758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.115362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.115627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.115635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.115645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.118380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.126890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.127307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.127585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.127595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.127602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.127767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.127931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.127938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.127944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.130669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.139769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.140154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.140383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.140393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.140400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.140574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.140749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.140756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.140762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.143487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.152639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.153086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.153330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.153363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.153385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.153821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.153995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.154002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.154009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.156708] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.165551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.165997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.166096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.166106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.166113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.166298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.166478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.166485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.166492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.169362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.178623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.179073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.179352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.179384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.179406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.179707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.179886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.179894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.179900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.182700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.191726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.192153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.192376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.192408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.192429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.192966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.193141] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.193148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.193154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.195966] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.204607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.205043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.205273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.205306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.205327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.205912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.206199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.206207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.206213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.208859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.217568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.218012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.218324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.218359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.218381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.218968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.219181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.219189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.219195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.221898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.230481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.230935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.231210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.044 [2024-05-15 11:18:02.231242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.044 [2024-05-15 11:18:02.231264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.044 [2024-05-15 11:18:02.231849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.044 [2024-05-15 11:18:02.232200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.044 [2024-05-15 11:18:02.232208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.044 [2024-05-15 11:18:02.232214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.044 [2024-05-15 11:18:02.234915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.044 [2024-05-15 11:18:02.243303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.044 [2024-05-15 11:18:02.243729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.243867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.243877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.045 [2024-05-15 11:18:02.243884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.045 [2024-05-15 11:18:02.244058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.045 [2024-05-15 11:18:02.244237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.045 [2024-05-15 11:18:02.244245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.045 [2024-05-15 11:18:02.244251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.045 [2024-05-15 11:18:02.246951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.045 [2024-05-15 11:18:02.256298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.045 [2024-05-15 11:18:02.256653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.256892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.256922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.045 [2024-05-15 11:18:02.256943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.045 [2024-05-15 11:18:02.257545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.045 [2024-05-15 11:18:02.258140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.045 [2024-05-15 11:18:02.258151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.045 [2024-05-15 11:18:02.258160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.045 [2024-05-15 11:18:02.262267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.045 [2024-05-15 11:18:02.269860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.045 [2024-05-15 11:18:02.270299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.270526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.270556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.045 [2024-05-15 11:18:02.270578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.045 [2024-05-15 11:18:02.270889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.045 [2024-05-15 11:18:02.271057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.045 [2024-05-15 11:18:02.271065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.045 [2024-05-15 11:18:02.271071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.045 [2024-05-15 11:18:02.273833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.045 [2024-05-15 11:18:02.282758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.045 [2024-05-15 11:18:02.283196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.283461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.283492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.045 [2024-05-15 11:18:02.283521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.045 [2024-05-15 11:18:02.284097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.045 [2024-05-15 11:18:02.284289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.045 [2024-05-15 11:18:02.284297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.045 [2024-05-15 11:18:02.284303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.045 [2024-05-15 11:18:02.287004] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.045 [2024-05-15 11:18:02.295662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.045 [2024-05-15 11:18:02.295999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.296159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.045 [2024-05-15 11:18:02.296174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.045 [2024-05-15 11:18:02.296181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.045 [2024-05-15 11:18:02.296370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.045 [2024-05-15 11:18:02.296545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.045 [2024-05-15 11:18:02.296553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.045 [2024-05-15 11:18:02.296559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.045 [2024-05-15 11:18:02.299265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.304 [2024-05-15 11:18:02.308911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.304 [2024-05-15 11:18:02.309333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.309547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.309557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.304 [2024-05-15 11:18:02.309564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.304 [2024-05-15 11:18:02.309739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.304 [2024-05-15 11:18:02.309914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.304 [2024-05-15 11:18:02.309921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.304 [2024-05-15 11:18:02.309928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.304 [2024-05-15 11:18:02.312844] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.304 [2024-05-15 11:18:02.321875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.304 [2024-05-15 11:18:02.322291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.322428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.322439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.304 [2024-05-15 11:18:02.322446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.304 [2024-05-15 11:18:02.322623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.304 [2024-05-15 11:18:02.322797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.304 [2024-05-15 11:18:02.322805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.304 [2024-05-15 11:18:02.322811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.304 [2024-05-15 11:18:02.325539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.304 [2024-05-15 11:18:02.334991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.304 [2024-05-15 11:18:02.335418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.335593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.335603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.304 [2024-05-15 11:18:02.335610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.304 [2024-05-15 11:18:02.335790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.304 [2024-05-15 11:18:02.335969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.304 [2024-05-15 11:18:02.335977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.304 [2024-05-15 11:18:02.335983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.304 [2024-05-15 11:18:02.338840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.304 [2024-05-15 11:18:02.348080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.304 [2024-05-15 11:18:02.348512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.348744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.304 [2024-05-15 11:18:02.348754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.304 [2024-05-15 11:18:02.348761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.304 [2024-05-15 11:18:02.348940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.304 [2024-05-15 11:18:02.349119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.304 [2024-05-15 11:18:02.349131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.304 [2024-05-15 11:18:02.349137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.304 [2024-05-15 11:18:02.351997] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.304 [2024-05-15 11:18:02.361212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.361651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.361857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.361866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.361873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.362049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.362246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.362254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.362261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.365082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.374284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.374735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.374921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.374931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.374937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.375117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.375301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.375310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.375316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.378101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.387185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.387635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.387809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.387819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.387825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.387988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.388153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.388160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.388170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.390903] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.400063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.400513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.400785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.400816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.400838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.401438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.401634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.401642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.401648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.404427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.413119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.413573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.413798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.413808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.413815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.413994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.414179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.414187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.414194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.417080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.426192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.426609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.426751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.426761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.426768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.426947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.427126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.427134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.427141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.430011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.439377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.439806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.439957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.439967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.439975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.440154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.440339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.440351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.440357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.443185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.452270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.452645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.452874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.452884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.452891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.453064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.453243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.453251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.453258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.455956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.465080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.465513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.465732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.465764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.465785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.466385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.466791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.466799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.305 [2024-05-15 11:18:02.466805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.305 [2024-05-15 11:18:02.469534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.305 [2024-05-15 11:18:02.478002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.305 [2024-05-15 11:18:02.478432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.478653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.305 [2024-05-15 11:18:02.478685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.305 [2024-05-15 11:18:02.478706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.305 [2024-05-15 11:18:02.479304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.305 [2024-05-15 11:18:02.479497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.305 [2024-05-15 11:18:02.479504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.479513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.482201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.306 [2024-05-15 11:18:02.490892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.306 [2024-05-15 11:18:02.491282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.491497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.491507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.306 [2024-05-15 11:18:02.491514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.306 [2024-05-15 11:18:02.491678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.306 [2024-05-15 11:18:02.491843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.306 [2024-05-15 11:18:02.491850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.491856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.494579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.306 [2024-05-15 11:18:02.503732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.306 [2024-05-15 11:18:02.504135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.504358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.504389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.306 [2024-05-15 11:18:02.504413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.306 [2024-05-15 11:18:02.504997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.306 [2024-05-15 11:18:02.505595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.306 [2024-05-15 11:18:02.505621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.505641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.508391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.306 [2024-05-15 11:18:02.516602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.306 [2024-05-15 11:18:02.517092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.517327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.517360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.306 [2024-05-15 11:18:02.517382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.306 [2024-05-15 11:18:02.517720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.306 [2024-05-15 11:18:02.517894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.306 [2024-05-15 11:18:02.517902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.517908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.520607] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.306 [2024-05-15 11:18:02.529716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.306 [2024-05-15 11:18:02.530200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.530429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.530459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.306 [2024-05-15 11:18:02.530481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.306 [2024-05-15 11:18:02.530936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.306 [2024-05-15 11:18:02.531206] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.306 [2024-05-15 11:18:02.531218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.531227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.535335] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.306 [2024-05-15 11:18:02.543370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.306 [2024-05-15 11:18:02.543738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.543912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.543942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.306 [2024-05-15 11:18:02.543964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.306 [2024-05-15 11:18:02.544473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.306 [2024-05-15 11:18:02.544653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.306 [2024-05-15 11:18:02.544663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.544671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.547543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.306 [2024-05-15 11:18:02.556500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.306 [2024-05-15 11:18:02.556858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.557016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.306 [2024-05-15 11:18:02.557026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.306 [2024-05-15 11:18:02.557033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.306 [2024-05-15 11:18:02.557217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.306 [2024-05-15 11:18:02.557397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.306 [2024-05-15 11:18:02.557405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.306 [2024-05-15 11:18:02.557411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.306 [2024-05-15 11:18:02.560285] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.566 [2024-05-15 11:18:02.569628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.566 [2024-05-15 11:18:02.570044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.566 [2024-05-15 11:18:02.570189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.566 [2024-05-15 11:18:02.570222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.566 [2024-05-15 11:18:02.570245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.566 [2024-05-15 11:18:02.570668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.566 [2024-05-15 11:18:02.570843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.566 [2024-05-15 11:18:02.570851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.566 [2024-05-15 11:18:02.570857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.566 [2024-05-15 11:18:02.573763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.566 [2024-05-15 11:18:02.582873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.566 [2024-05-15 11:18:02.583215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.566 [2024-05-15 11:18:02.583427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.583438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.567 [2024-05-15 11:18:02.583445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.567 [2024-05-15 11:18:02.583625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.567 [2024-05-15 11:18:02.583804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.567 [2024-05-15 11:18:02.583812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.567 [2024-05-15 11:18:02.583818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.567 [2024-05-15 11:18:02.586692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.567 [2024-05-15 11:18:02.595913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.567 [2024-05-15 11:18:02.596342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.596579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.596589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.567 [2024-05-15 11:18:02.596596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.567 [2024-05-15 11:18:02.596775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.567 [2024-05-15 11:18:02.596955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.567 [2024-05-15 11:18:02.596962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.567 [2024-05-15 11:18:02.596969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.567 [2024-05-15 11:18:02.599835] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.567 [2024-05-15 11:18:02.609110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.567 [2024-05-15 11:18:02.609488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.609726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.609737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.567 [2024-05-15 11:18:02.609744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.567 [2024-05-15 11:18:02.609922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.567 [2024-05-15 11:18:02.610102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.567 [2024-05-15 11:18:02.610110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.567 [2024-05-15 11:18:02.610116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.567 [2024-05-15 11:18:02.612981] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.567 [2024-05-15 11:18:02.622265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.567 [2024-05-15 11:18:02.622619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.622852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.622862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.567 [2024-05-15 11:18:02.622869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.567 [2024-05-15 11:18:02.623047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.567 [2024-05-15 11:18:02.623232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.567 [2024-05-15 11:18:02.623240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.567 [2024-05-15 11:18:02.623246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.567 [2024-05-15 11:18:02.626108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
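Editorial note on the repeated failures above: `errno = 111` reported by `posix_sock_create` is `ECONNREFUSED` on Linux; the NVMe/TCP target at 10.0.0.2:4420 has no listener while it is being restarted, so every reconnect attempt from the host's reset loop is refused. The sketch below (not SPDK code; the `try_connect` helper is purely illustrative) reproduces that errno by connecting to a local port with no listener:

```python
# Illustrative sketch, not SPDK code: show that connecting to a TCP port
# with no listener fails with errno 111 (ECONNREFUSED) on Linux, the same
# errno logged by posix_sock_create above.
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success, else the errno."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port))

# Find a port with no listener: bind an ephemeral port, then close it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

rc = try_connect("127.0.0.1", dead_port)
print(rc)  # 111 (errno.ECONNREFUSED) on Linux
```

Once the target process is killed (see the `Killed "${NVMF_APP[@]}"` line further down), refusals like this are expected until `nvmf_tgt` is restarted and listening again on port 4420.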
00:26:05.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2401406 Killed "${NVMF_APP[@]}" "$@"
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:05.567 [2024-05-15 11:18:02.635473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.567 [2024-05-15 11:18:02.635894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.636126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.567 [2024-05-15 11:18:02.636137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.567 [2024-05-15 11:18:02.636144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.567 [2024-05-15 11:18:02.636330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.567 [2024-05-15 11:18:02.636541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.567 [2024-05-15 11:18:02.636549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.567 [2024-05-15 11:18:02.636555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2402987
11:18:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2402987
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:05.567 [2024-05-15 11:18:02.639427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 2402987 ']'
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:05.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:05.567 11:18:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.567 [2024-05-15 11:18:02.648657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.567 [2024-05-15 11:18:02.649099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.567 [2024-05-15 11:18:02.649291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.567 [2024-05-15 11:18:02.649304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:05.567 [2024-05-15 11:18:02.649312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:05.567 [2024-05-15 11:18:02.649502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:05.568 [2024-05-15 11:18:02.649681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.568 [2024-05-15 11:18:02.649690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.568 [2024-05-15 11:18:02.649697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.568 [2024-05-15 11:18:02.652563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.568 [2024-05-15 11:18:02.661859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.662275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.662482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.662492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.662499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.662678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.568 [2024-05-15 11:18:02.662859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.568 [2024-05-15 11:18:02.662866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.568 [2024-05-15 11:18:02.662872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.568 [2024-05-15 11:18:02.665743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.568 [2024-05-15 11:18:02.675037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.675485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.675721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.675732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.675739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.675919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.568 [2024-05-15 11:18:02.676098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.568 [2024-05-15 11:18:02.676105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.568 [2024-05-15 11:18:02.676112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.568 [2024-05-15 11:18:02.678980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.568 [2024-05-15 11:18:02.685866] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:26:05.568 [2024-05-15 11:18:02.685911] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:05.568 [2024-05-15 11:18:02.688287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.688585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.688737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.688749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.688757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.688936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.568 [2024-05-15 11:18:02.689116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.568 [2024-05-15 11:18:02.689125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.568 [2024-05-15 11:18:02.689133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.568 [2024-05-15 11:18:02.692006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.568 [2024-05-15 11:18:02.701479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.701905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.702078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.702088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.702096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.702281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.568 [2024-05-15 11:18:02.702462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.568 [2024-05-15 11:18:02.702470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.568 [2024-05-15 11:18:02.702476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.568 [2024-05-15 11:18:02.705342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.568 EAL: No free 2048 kB hugepages reported on node 1
00:26:05.568 [2024-05-15 11:18:02.714551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.714865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.714982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.714992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.714999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.715183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.568 [2024-05-15 11:18:02.715364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.568 [2024-05-15 11:18:02.715372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.568 [2024-05-15 11:18:02.715378] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.568 [2024-05-15 11:18:02.718267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.568 [2024-05-15 11:18:02.727649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.727928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.728065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.728075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.728082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.728279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.568 [2024-05-15 11:18:02.728458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.568 [2024-05-15 11:18:02.728466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.568 [2024-05-15 11:18:02.728473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.568 [2024-05-15 11:18:02.731300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.568 [2024-05-15 11:18:02.740796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.568 [2024-05-15 11:18:02.741284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.741451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.568 [2024-05-15 11:18:02.741461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.568 [2024-05-15 11:18:02.741468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.568 [2024-05-15 11:18:02.741648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.741827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.741835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.741842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.744670] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.746484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:05.569 [2024-05-15 11:18:02.753867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.569 [2024-05-15 11:18:02.754311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.754431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.754443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.569 [2024-05-15 11:18:02.754452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.569 [2024-05-15 11:18:02.754633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.754813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.754821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.754828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.757669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.767018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.569 [2024-05-15 11:18:02.767399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.767506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.767516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.569 [2024-05-15 11:18:02.767523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.569 [2024-05-15 11:18:02.767698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.767873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.767880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.767886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.770789] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.780152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.569 [2024-05-15 11:18:02.780520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.780634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.780644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.569 [2024-05-15 11:18:02.780651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.569 [2024-05-15 11:18:02.780824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.780998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.781006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.781012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.783855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.793260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.569 [2024-05-15 11:18:02.793705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.793899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.793910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.569 [2024-05-15 11:18:02.793918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.569 [2024-05-15 11:18:02.794098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.794287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.794296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.794304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.797176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.806478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.569 [2024-05-15 11:18:02.806802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.807012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.807023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.569 [2024-05-15 11:18:02.807031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.569 [2024-05-15 11:18:02.807216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.807398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.807406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.807413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.810281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.819569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.569 [2024-05-15 11:18:02.820030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.820198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.569 [2024-05-15 11:18:02.820209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.569 [2024-05-15 11:18:02.820217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.569 [2024-05-15 11:18:02.820396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.569 [2024-05-15 11:18:02.820576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.569 [2024-05-15 11:18:02.820585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.569 [2024-05-15 11:18:02.820591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.569 [2024-05-15 11:18:02.823463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.569 [2024-05-15 11:18:02.829204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:05.569 [2024-05-15 11:18:02.829230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:05.569 [2024-05-15 11:18:02.829237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:05.569 [2024-05-15 11:18:02.829246] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:05.570 [2024-05-15 11:18:02.829252] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:05.570 [2024-05-15 11:18:02.829292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:05.570 [2024-05-15 11:18:02.829502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:05.570 [2024-05-15 11:18:02.829505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:05.830 [2024-05-15 11:18:02.832669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.830 [2024-05-15 11:18:02.833090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.830 [2024-05-15 11:18:02.833320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.830 [2024-05-15 11:18:02.833332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.830 [2024-05-15 11:18:02.833340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.833521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.833702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.833711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.833718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.836615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.845764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.846151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.846303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.846315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.846323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.846505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.846686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.846694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.846701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.849572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.858865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.859301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.859487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.859498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.859506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.859687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.859867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.859881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.859889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.862759] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.872060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.872362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.872475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.872486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.872494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.872674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.872854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.872863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.872870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.875733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.885192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.885577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.885745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.885756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.885763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.885943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.886124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.886132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.886139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.889005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.898301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.898601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.898709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.898720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.898728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.898907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.899086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.899095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.899106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.901974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.911436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.911762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.911974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.911984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.911992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.912176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.912357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.912365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.912372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.915236] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.924520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.924987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.925109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.925120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.925127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.925311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.925490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.925499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.925505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.928369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.937653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.938097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.938280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.938291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.938299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.938478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.938657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.938666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.938672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.941540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.831 [2024-05-15 11:18:02.950822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.831 [2024-05-15 11:18:02.951259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.951422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.831 [2024-05-15 11:18:02.951432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.831 [2024-05-15 11:18:02.951439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.831 [2024-05-15 11:18:02.951619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.831 [2024-05-15 11:18:02.951799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.831 [2024-05-15 11:18:02.951807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.831 [2024-05-15 11:18:02.951814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.831 [2024-05-15 11:18:02.954678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:02.963953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:02.964311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:02.964495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:02.964506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:02.964514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:02.964694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:02.964874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:02.964882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:02.964889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:02.967751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:02.977040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:02.977405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:02.977638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:02.977648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:02.977655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:02.977835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:02.978015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:02.978023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:02.978030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:02.980900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:02.990178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:02.990618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:02.990848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:02.990858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:02.990865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:02.991043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:02.991227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:02.991236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:02.991242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:02.994101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:03.003380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:03.003818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.003972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.003983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:03.003990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:03.004173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:03.004353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:03.004361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:03.004367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:03.007229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:03.016504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:03.016921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.017129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.017140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:03.017147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:03.017330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:03.017511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:03.017519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:03.017525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:03.020384] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:03.029658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:03.030100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.030264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.030275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:03.030282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:03.030461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:03.030641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:03.030649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:03.030656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:03.033519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:03.042794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:03.043232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.043464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.043474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:03.043482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:03.043661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:03.043841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:03.043849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.832 [2024-05-15 11:18:03.043855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.832 [2024-05-15 11:18:03.046719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.832 [2024-05-15 11:18:03.056011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.832 [2024-05-15 11:18:03.056455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.056566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.832 [2024-05-15 11:18:03.056577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.832 [2024-05-15 11:18:03.056584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.832 [2024-05-15 11:18:03.056763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.832 [2024-05-15 11:18:03.056942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.832 [2024-05-15 11:18:03.056950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.833 [2024-05-15 11:18:03.056957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.833 [2024-05-15 11:18:03.059824] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.833 [2024-05-15 11:18:03.069095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.833 [2024-05-15 11:18:03.069539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.833 [2024-05-15 11:18:03.069771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.833 [2024-05-15 11:18:03.069784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.833 [2024-05-15 11:18:03.069791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.833 [2024-05-15 11:18:03.069970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.833 [2024-05-15 11:18:03.070149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.833 [2024-05-15 11:18:03.070157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.833 [2024-05-15 11:18:03.070168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.833 [2024-05-15 11:18:03.073036] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.833 [2024-05-15 11:18:03.082317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.833 [2024-05-15 11:18:03.082753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.833 [2024-05-15 11:18:03.082985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.833 [2024-05-15 11:18:03.082995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:05.833 [2024-05-15 11:18:03.083002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:05.833 [2024-05-15 11:18:03.083185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:05.833 [2024-05-15 11:18:03.083365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.833 [2024-05-15 11:18:03.083374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.833 [2024-05-15 11:18:03.083380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.833 [2024-05-15 11:18:03.086249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.099 [2024-05-15 11:18:03.095437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.099 [2024-05-15 11:18:03.095892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.099 [2024-05-15 11:18:03.096125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.099 [2024-05-15 11:18:03.096136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.099 [2024-05-15 11:18:03.096144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.099 [2024-05-15 11:18:03.096326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.099 [2024-05-15 11:18:03.096506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.099 [2024-05-15 11:18:03.096515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.099 [2024-05-15 11:18:03.096521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.099 [2024-05-15 11:18:03.099393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.099 [2024-05-15 11:18:03.108529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.099 [2024-05-15 11:18:03.108979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.099 [2024-05-15 11:18:03.109215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.109226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.109237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.109416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.109596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.109604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.109610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.112472] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.121953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.122400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.122556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.122566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.122573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.122754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.122935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.122943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.122950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.125816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.135093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.135534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.135761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.135772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.135779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.135958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.136137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.136146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.136152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.139017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.148295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.148732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.148881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.148891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.148898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.149080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.149266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.149275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.149281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.152141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.161433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.161884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.162053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.162063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.162070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.162255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.162435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.162443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.162449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.165312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.174601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.174961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.175197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.175207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.175214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.175393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.175572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.175581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.175587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.178448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.187727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.188171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.188335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.188346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.188353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.188532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.188715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.188723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.188729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.191596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.200875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.201242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.201472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.201483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.201489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.201669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.201849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.201857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.201863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.204727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.213988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.214427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.214613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.214623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.214630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.214809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.214989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.214997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.215003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.217867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.227193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.227534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.227695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.227706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.227713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.227892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.228071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.228083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.228089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.230957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.240394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.240767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.240917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.240927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.240934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.241112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.241296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.241304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.241310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.244161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.253615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.253974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.254207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.254218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.100 [2024-05-15 11:18:03.254226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.100 [2024-05-15 11:18:03.254406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.100 [2024-05-15 11:18:03.254585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.100 [2024-05-15 11:18:03.254593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.100 [2024-05-15 11:18:03.254600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.100 [2024-05-15 11:18:03.257466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.100 [2024-05-15 11:18:03.266746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.100 [2024-05-15 11:18:03.267179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.100 [2024-05-15 11:18:03.267389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.267400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.267407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.267587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.267766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.267775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.267784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.270645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.279927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.280368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.280599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.280609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.280616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.280795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.280974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.280982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.280989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.283853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.293159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.293528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.293759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.293769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.293776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.293955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.294136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.294144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.294150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.297015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.306290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.306726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.306894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.306904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.306911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.307090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.307273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.307282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.307288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.310152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.319429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.319869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.320097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.320108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.320116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.320298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.320479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.320487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.320494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.323360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.332641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.333078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.333260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.333271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.333279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.333459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.333639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.333648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.333655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.336519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.345801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.346191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.346353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.346363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.346370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.346549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.346728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.346736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.346742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.101 [2024-05-15 11:18:03.349608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.101 [2024-05-15 11:18:03.358944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.101 [2024-05-15 11:18:03.359306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.359516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.101 [2024-05-15 11:18:03.359527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.101 [2024-05-15 11:18:03.359534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.101 [2024-05-15 11:18:03.359713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.101 [2024-05-15 11:18:03.359894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.101 [2024-05-15 11:18:03.359902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.101 [2024-05-15 11:18:03.359909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.361 [2024-05-15 11:18:03.362802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.361 [2024-05-15 11:18:03.372128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.361 [2024-05-15 11:18:03.372583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.361 [2024-05-15 11:18:03.372816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.361 [2024-05-15 11:18:03.372827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.361 [2024-05-15 11:18:03.372834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.361 [2024-05-15 11:18:03.373014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.361 [2024-05-15 11:18:03.373198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.361 [2024-05-15 11:18:03.373207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.361 [2024-05-15 11:18:03.373213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.361 [2024-05-15 11:18:03.376074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.361 [2024-05-15 11:18:03.385354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.385690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.385923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.385935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.385943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.386122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.386308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.386317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.386323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.389186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.398443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.398891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.399056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.399066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.399073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.399255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.399436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.399444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.399450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.402316] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.411570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.412032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.412202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.412214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.412222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.412401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.412581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.412589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.412595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.415456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.424735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.425090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.425248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.425260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.425267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.425446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.425625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.425634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.425640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.428504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.437952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.438316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.438477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.438494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.438501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.438681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.438861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.438870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.438876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.441740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.451191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.451629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.451842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.451853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.451860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.452039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.452224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.452234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.452241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.455101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.464381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.464775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.464847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.464857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.464864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.465042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.465227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.465237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.465245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.468110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.477558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.477996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.478230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.478241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.362 [2024-05-15 11:18:03.478251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.362 [2024-05-15 11:18:03.478431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.362 [2024-05-15 11:18:03.478610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.362 [2024-05-15 11:18:03.478618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.362 [2024-05-15 11:18:03.478624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.362 [2024-05-15 11:18:03.481485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.362 [2024-05-15 11:18:03.490636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.362 [2024-05-15 11:18:03.491075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.491308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.362 [2024-05-15 11:18:03.491319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.363 [2024-05-15 11:18:03.491326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.363 [2024-05-15 11:18:03.491506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.363 [2024-05-15 11:18:03.491686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.363 [2024-05-15 11:18:03.491695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.363 [2024-05-15 11:18:03.491701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.363 [2024-05-15 11:18:03.494563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.363 [2024-05-15 11:18:03.503837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:26:06.363 [2024-05-15 11:18:03.504298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.363 [2024-05-15 11:18:03.504461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.363 [2024-05-15 11:18:03.504473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.363 [2024-05-15 11:18:03.504480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0
[2024-05-15 11:18:03.504659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
[2024-05-15 11:18:03.504839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-05-15 11:18:03.504847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-05-15 11:18:03.504854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
11:18:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable
11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 11:18:03.507722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.363 [2024-05-15 11:18:03.517004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.363 [2024-05-15 11:18:03.517313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.363 [2024-05-15 11:18:03.517452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.363 [2024-05-15 11:18:03.517463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.363 [2024-05-15 11:18:03.517470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.363 [2024-05-15 11:18:03.517650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.363 [2024-05-15 11:18:03.517831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.363 [2024-05-15 11:18:03.517839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.363 [2024-05-15 11:18:03.517845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.363 [2024-05-15 11:18:03.520711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.363 [2024-05-15 11:18:03.530161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.363 [2024-05-15 11:18:03.530461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.363 [2024-05-15 11:18:03.530550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.363 [2024-05-15 11:18:03.530561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420
00:26:06.363 [2024-05-15 11:18:03.530568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set
00:26:06.363 [2024-05-15 11:18:03.530748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor
00:26:06.363 [2024-05-15 11:18:03.530928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.363 [2024-05-15 11:18:03.530936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.363 [2024-05-15 11:18:03.530943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.363 [2024-05-15 11:18:03.533808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.363 [2024-05-15 11:18:03.543255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.363 [2024-05-15 11:18:03.543629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.363 [2024-05-15 11:18:03.543744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.363 [2024-05-15 11:18:03.543754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:06.363 [2024-05-15 11:18:03.543761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:06.363 [2024-05-15 11:18:03.543941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:06.363 [2024-05-15 11:18:03.544121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.363 [2024-05-15 11:18:03.544129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.363 [2024-05-15 11:18:03.544135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:06.363 [2024-05-15 11:18:03.546720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.363 [2024-05-15 11:18:03.547002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.363 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.363 [2024-05-15 11:18:03.556454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.363 [2024-05-15 11:18:03.556883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.363 [2024-05-15 11:18:03.557038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.363 [2024-05-15 11:18:03.557048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:06.363 [2024-05-15 11:18:03.557055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:06.363 [2024-05-15 11:18:03.557239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:06.363 [2024-05-15 11:18:03.557418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.363 [2024-05-15 11:18:03.557426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.363 [2024-05-15 11:18:03.557432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:06.363 [2024-05-15 11:18:03.560296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.363 [2024-05-15 11:18:03.569573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.363 [2024-05-15 11:18:03.570014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.363 [2024-05-15 11:18:03.570224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.363 [2024-05-15 11:18:03.570235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:06.363 [2024-05-15 11:18:03.570242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:06.363 [2024-05-15 11:18:03.570422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:06.363 [2024-05-15 11:18:03.570602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.363 [2024-05-15 11:18:03.570610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.363 [2024-05-15 11:18:03.570616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.363 [2024-05-15 11:18:03.573486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.363 [2024-05-15 11:18:03.582771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.364 [2024-05-15 11:18:03.583233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.364 [2024-05-15 11:18:03.583464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.364 [2024-05-15 11:18:03.583475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:06.364 [2024-05-15 11:18:03.583482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:06.364 [2024-05-15 11:18:03.583663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:06.364 [2024-05-15 11:18:03.583842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.364 [2024-05-15 11:18:03.583855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.364 [2024-05-15 11:18:03.583861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.364 Malloc0 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.364 [2024-05-15 11:18:03.586727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.364 [2024-05-15 11:18:03.596000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.364 [2024-05-15 11:18:03.596442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.364 [2024-05-15 11:18:03.596604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.364 [2024-05-15 11:18:03.596615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f1840 with addr=10.0.0.2, port=4420 00:26:06.364 [2024-05-15 11:18:03.596622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1840 is same with the state(5) to be set 00:26:06.364 [2024-05-15 11:18:03.596801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1840 (9): Bad file descriptor 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:06.364 [2024-05-15 11:18:03.596980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.364 [2024-05-15 11:18:03.596989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.364 [2024-05-15 11:18:03.596996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.364 [2024-05-15 11:18:03.599851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.364 [2024-05-15 11:18:03.607856] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:06.364 [2024-05-15 11:18:03.608076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.364 [2024-05-15 11:18:03.609121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.364 11:18:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2401886 00:26:06.621 [2024-05-15 11:18:03.732076] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:16.578 00:26:16.578 Latency(us) 00:26:16.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.578 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:16.578 Verification LBA range: start 0x0 length 0x4000 00:26:16.578 Nvme1n1 : 15.00 7917.89 30.93 12406.11 0.00 6277.50 439.87 13563.10 00:26:16.578 =================================================================================================================== 00:26:16.578 Total : 7917.89 30.93 12406.11 0.00 6277.50 439.87 13563.10 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:16.578 rmmod nvme_tcp 00:26:16.578 rmmod nvme_fabrics 00:26:16.578 rmmod nvme_keyring 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2402987 ']' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2402987 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 2402987 ']' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 2402987 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2402987 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2402987' 00:26:16.578 killing process with pid 2402987 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 2402987 00:26:16.578 [2024-05-15 11:18:12.331609] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@971 -- # wait 2402987 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.578 11:18:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.513 11:18:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:17.513 00:26:17.513 real 0m26.155s 00:26:17.513 user 1m3.228s 00:26:17.513 sys 0m6.063s 00:26:17.513 11:18:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:17.513 11:18:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.513 ************************************ 00:26:17.513 END TEST nvmf_bdevperf 00:26:17.513 ************************************ 00:26:17.513 11:18:14 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:17.513 11:18:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:17.513 11:18:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:17.513 11:18:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:17.513 ************************************ 00:26:17.513 START TEST nvmf_target_disconnect 00:26:17.513 ************************************ 00:26:17.513 11:18:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:17.772 * Looking for test storage... 
00:26:17.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:17.772 11:18:14 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:17.772 11:18:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:23.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:23.032 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.032 11:18:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:23.032 Found net devices under 0000:86:00.0: cvl_0_0 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:23.032 Found net devices under 0000:86:00.1: cvl_0_1 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.032 11:18:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:23.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:23.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms
00:26:23.032 
00:26:23.032 --- 10.0.0.2 ping statistics ---
00:26:23.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:23.032 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:23.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:23.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:26:23.032 
00:26:23.032 --- 10.0.0.1 ping statistics ---
00:26:23.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:23.032 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:23.032 11:18:20
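The nvmf/common.sh commands above carve one port of the NIC pair into a network namespace, so the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1) exchange real TCP traffic on a single machine. A dry-run sketch of that sequence, with interface names and addresses taken from the log; the `run` echo wrapper is ours, so the sketch can be executed without root (replace `run` with `sudo` to apply it for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmf/common.sh above.
# "run" only echoes each command, so this is safe to execute unprivileged.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk         # target-side namespace (name from the log)
TARGET_IF=cvl_0_0          # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1       # stays in the root namespace, gets 10.0.0.1

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic (port 4420) in on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving the target interface into its own namespace is what lets every subsequent target-side command in the log be prefixed with `ip netns exec cvl_0_0_ns_spdk` while the initiator runs unmodified in the root namespace.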
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.032 11:18:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.291 ************************************ 00:26:23.291 START TEST nvmf_target_disconnect_tc1 00:26:23.291 ************************************ 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:23.291 11:18:20 
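tc1 deliberately points the reconnect example at 10.0.0.2:4420 before any target is listening and passes only if the probe fails (the `NOT` wrapper and `es=1` bookkeeping in this stretch). A stripped-down sketch of that expect-failure pattern; this `NOT` is our simplified stand-in for the autotest helper of the same name, and `false` stands in for the failing reconnect invocation:

```shell
#!/usr/bin/env bash
# Simplified stand-in for the autotest NOT helper used by tc1: succeed
# only if the wrapped command fails (an expected-failure assertion).
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded -> test fails
    fi
    return 0            # command failed, which is what tc1 expects
}

# "false" stands in for probing a port with no listener, e.g.:
#   reconnect -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
NOT false && echo "expected failure observed"   # → expected failure observed
```

Inverting the exit status this way is why the connect() errno=111 errors just below are the test succeeding, not failing.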
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:23.291 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.292 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.292 [2024-05-15 11:18:20.421680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-05-15 11:18:20.421962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-05-15 11:18:20.421975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2030ae0 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-05-15 11:18:20.421996] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:23.292 [2024-05-15 11:18:20.422006] nvme.c: 
821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:23.292 [2024-05-15 11:18:20.422013] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:23.292 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:23.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:23.292 Initializing NVMe Controllers 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:23.292 00:26:23.292 real 0m0.098s 00:26:23.292 user 0m0.043s 00:26:23.292 sys 0m0.054s 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.292 ************************************ 00:26:23.292 END TEST nvmf_target_disconnect_tc1 00:26:23.292 ************************************ 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.292 ************************************ 00:26:23.292 START TEST nvmf_target_disconnect_tc2 00:26:23.292 
************************************ 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2408485 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2408485 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2408485 ']' 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:23.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:23.292 11:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.550 [2024-05-15 11:18:20.557175] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:26:23.550 [2024-05-15 11:18:20.557217] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.550 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.550 [2024-05-15 11:18:20.632395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.550 [2024-05-15 11:18:20.711542] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.550 [2024-05-15 11:18:20.711581] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.550 [2024-05-15 11:18:20.711588] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.550 [2024-05-15 11:18:20.711595] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.550 [2024-05-15 11:18:20.711600] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
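The `waitforlisten` step above blocks until the freshly launched nvmf_tgt has created its RPC socket (/var/tmp/spdk.sock in the log). A sketch of such a polling loop; this is our simplified version under those assumptions, not the actual autotest helper, which additionally checks that the pid is still alive between polls:

```shell
#!/usr/bin/env bash
# Simplified version of the waitforlisten step: poll until the given
# UNIX-domain RPC socket exists, giving up after a bounded retry count.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0    # socket is there: target is up
        sleep 0.1
    done
    return 1                          # timed out waiting for the target
}
```

Only once this returns does the harness start issuing the rpc_cmd configuration calls that follow.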
00:26:23.550 [2024-05-15 11:18:20.711673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:23.550 [2024-05-15 11:18:20.712182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:23.550 [2024-05-15 11:18:20.712257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:23.550 [2024-05-15 11:18:20.712257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:24.156 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:24.156 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:26:24.156 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.156 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:24.156 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 Malloc0 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 [2024-05-15 11:18:21.431895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 [2024-05-15 11:18:21.459918] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:24.438 [2024-05-15 11:18:21.460190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2408735 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:24.438 11:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:24.438 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.343 11:18:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2408485 00:26:26.343 11:18:23 
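The rpc_cmd calls in this stretch build the target side: a Malloc0 bdev (64 MB, 512-byte blocks), the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.2:4420. A dry-run sketch of the same sequence written as direct rpc.py invocations; the `rpc` echo wrapper is ours, so nothing is actually configured:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target configuration driven through rpc_cmd above.
# "rpc" only echoes each call, so the sketch runs without a live target.
rpc() { echo "+ rpc.py $*"; }

rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
rpc nvmf_create_transport -t tcp -o             # TCP transport
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

After this configuration the reconnect example is launched against 10.0.0.2:4420, and tc2 kills the target mid-run to exercise the disconnect path.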
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O 
failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Write completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.343 starting I/O failed 00:26:26.343 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 [2024-05-15 11:18:23.486232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 
00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 [2024-05-15 11:18:23.486442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.344 
Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed 
with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 [2024-05-15 11:18:23.486643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, 
sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Write completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.344 Read completed with error (sct=0, sc=8) 00:26:26.344 starting I/O failed 00:26:26.345 Read completed with error (sct=0, sc=8) 00:26:26.345 starting I/O failed 00:26:26.345 Write completed with error (sct=0, sc=8) 00:26:26.345 starting I/O failed 00:26:26.345 Read completed with error (sct=0, sc=8) 00:26:26.345 starting I/O failed 00:26:26.345 [2024-05-15 11:18:23.486835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.345 [2024-05-15 11:18:23.487028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.345 [2024-05-15 
11:18:23.487196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.345 [2024-05-15 11:18:23.487209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.345 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair failed sequence repeats continuously through 11:18:23.525448 ...]
00:26:26.348 [2024-05-15 11:18:23.525722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.525977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.526006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 00:26:26.348 [2024-05-15 11:18:23.526296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.526414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.526443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 00:26:26.348 [2024-05-15 11:18:23.526629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.526756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.526784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 00:26:26.348 [2024-05-15 11:18:23.526924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.527128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.527141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 
00:26:26.348 [2024-05-15 11:18:23.527292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.527453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.527467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 00:26:26.348 [2024-05-15 11:18:23.527672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.527841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.527854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 00:26:26.348 [2024-05-15 11:18:23.528065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.528184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.528215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 00:26:26.348 [2024-05-15 11:18:23.528390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.528578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.528607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.348 qpair failed and we were unable to recover it. 
00:26:26.348 [2024-05-15 11:18:23.528849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.529030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-05-15 11:18:23.529059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.529186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.529288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.529302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.529400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.529575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.529589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.529677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.529816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.529830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 
00:26:26.349 [2024-05-15 11:18:23.529989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.530174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.530203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.530329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.530566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.530594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.530778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.531307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 
00:26:26.349 [2024-05-15 11:18:23.531567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.531822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.531968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.532070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.532171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.532184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.532348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.532423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.532436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 
00:26:26.349 [2024-05-15 11:18:23.532517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.532754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.532768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.532908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.533096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.533109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.533275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.533504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.533518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.533625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.533734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.533746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 
00:26:26.349 [2024-05-15 11:18:23.533982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.534221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.534466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.534642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.534789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 
00:26:26.349 [2024-05-15 11:18:23.534928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.535159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.535174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.535352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.535432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.535442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.535528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.535599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-05-15 11:18:23.535609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.349 qpair failed and we were unable to recover it. 00:26:26.349 [2024-05-15 11:18:23.535693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.535903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.535912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.536063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.536277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.536307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.536485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.536788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.536824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.537077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.537174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.537189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.537343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.537511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.537525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.537737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.537821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.537835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.537997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.538334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.538593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.538774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.538880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.539051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.539308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.539547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.539714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.539890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.539969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.540212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.540389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.540573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.540801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.540895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.540969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.541211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.541412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.541721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.541860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.541987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.542178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.542192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.350 [2024-05-15 11:18:23.542268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.542409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.542422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 
00:26:26.350 [2024-05-15 11:18:23.542520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.542658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.350 [2024-05-15 11:18:23.542671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.350 qpair failed and we were unable to recover it. 00:26:26.351 [2024-05-15 11:18:23.542744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.351 [2024-05-15 11:18:23.542947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.351 [2024-05-15 11:18:23.542960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.351 qpair failed and we were unable to recover it. 00:26:26.351 [2024-05-15 11:18:23.543109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.351 [2024-05-15 11:18:23.543192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.351 [2024-05-15 11:18:23.543206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.351 qpair failed and we were unable to recover it. 00:26:26.351 [2024-05-15 11:18:23.543351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.351 [2024-05-15 11:18:23.543494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.351 [2024-05-15 11:18:23.543507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.351 qpair failed and we were unable to recover it. 
00:26:26.354 [2024-05-15 11:18:23.572055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.572224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.572238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.354 qpair failed and we were unable to recover it. 00:26:26.354 [2024-05-15 11:18:23.572321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.572470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.572488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.354 qpair failed and we were unable to recover it. 00:26:26.354 [2024-05-15 11:18:23.572647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.572902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.572931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.354 qpair failed and we were unable to recover it. 00:26:26.354 [2024-05-15 11:18:23.573131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.573256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.573271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.354 qpair failed and we were unable to recover it. 
00:26:26.354 [2024-05-15 11:18:23.573372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.573529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.573543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.354 qpair failed and we were unable to recover it. 00:26:26.354 [2024-05-15 11:18:23.573708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.573863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.354 [2024-05-15 11:18:23.573876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.354 qpair failed and we were unable to recover it. 00:26:26.354 [2024-05-15 11:18:23.573970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.574106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.574119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.574297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.574397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.574410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.574618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.574757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.574770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.574915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.575120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.575133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.575296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.575506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.575535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.575647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.575909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.575938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.576125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.576277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.576291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.576473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.576556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.576569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.576714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.576815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.576829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.576982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.577186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.577200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.577285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.577437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.577466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.577580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.577815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.577844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.578108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.578241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.578271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.578540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.578746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.578776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.578974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.579324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.579599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.579861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.579996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.580163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.580484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.580843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.580996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.581096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.581302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.581316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.581481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.581570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.581583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.581672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.581755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.581769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.582005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.582241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.582271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 00:26:26.355 [2024-05-15 11:18:23.582487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.582728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.582741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.355 qpair failed and we were unable to recover it. 
00:26:26.355 [2024-05-15 11:18:23.582845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.355 [2024-05-15 11:18:23.583053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.583217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.583540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.583725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.583818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 
00:26:26.356 [2024-05-15 11:18:23.584004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.584184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.584511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.584774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.584927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 
00:26:26.356 [2024-05-15 11:18:23.585023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.585278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.585590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.585858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.585976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 
00:26:26.356 [2024-05-15 11:18:23.586082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.586386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.586633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.586878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.586981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 
00:26:26.356 [2024-05-15 11:18:23.587131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.587390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.587622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.587808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.587917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 
00:26:26.356 [2024-05-15 11:18:23.588023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.588275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.588455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.588710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.588797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 
00:26:26.356 [2024-05-15 11:18:23.588886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.589052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.589066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.589227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.589307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.589320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.356 qpair failed and we were unable to recover it. 00:26:26.356 [2024-05-15 11:18:23.589408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.356 [2024-05-15 11:18:23.589491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.357 [2024-05-15 11:18:23.589504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.357 qpair failed and we were unable to recover it. 00:26:26.357 [2024-05-15 11:18:23.589642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.357 [2024-05-15 11:18:23.589824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.357 [2024-05-15 11:18:23.589852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.357 qpair failed and we were unable to recover it. 
00:26:26.636 [2024-05-15 11:18:23.616920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.617079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.617113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.617380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.617550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.617579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.617702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.617907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.617936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.618123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.618324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.618338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 
00:26:26.636 [2024-05-15 11:18:23.618498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.618592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.618605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.618745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.618900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.618913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.619058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.619272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.619303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.619548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.619730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.619760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 
00:26:26.636 [2024-05-15 11:18:23.620030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.620307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.620337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.620518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.620603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.620616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.620779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.620986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.620999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.621183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.621337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.621367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 
00:26:26.636 [2024-05-15 11:18:23.621498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.621610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.621639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.621898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.622235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.622557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 
00:26:26.636 [2024-05-15 11:18:23.622809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.622909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.623012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.623090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.623103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.623191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.623366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.623395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.636 [2024-05-15 11:18:23.623600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.623809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.623844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 
00:26:26.636 [2024-05-15 11:18:23.624060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.624239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.636 [2024-05-15 11:18:23.624279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.636 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.624370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.624521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.624535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.624681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.624846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.624863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.625050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.625139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.625152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.625312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.625466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.625480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.625647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.625823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.625837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.625925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.626124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.626420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.626749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.626854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.626936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.627109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.627122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.627286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.627375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.627406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.627605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.627813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.627841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.628029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.628210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.628241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.628429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.628566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.628579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.628784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.628890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.628918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.629116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.629287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.629301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.629402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.629543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.629557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.629725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.629861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.629889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.630017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.630188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.630219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.630457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.630617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.630630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.630793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.630872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.630885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.630982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.631125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.631138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.631309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.631484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.631521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.631763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.632024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.632054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.632360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.632512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.632525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.632665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.632919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.632948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.637 [2024-05-15 11:18:23.633116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.633310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.633324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 
00:26:26.637 [2024-05-15 11:18:23.633508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.633660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.637 [2024-05-15 11:18:23.633696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.637 qpair failed and we were unable to recover it. 00:26:26.638 [2024-05-15 11:18:23.633890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.634014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.634042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 00:26:26.638 [2024-05-15 11:18:23.634254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.634518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.634547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 00:26:26.638 [2024-05-15 11:18:23.634751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.634884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.634918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 
00:26:26.638 [2024-05-15 11:18:23.635114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.635377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.635408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 00:26:26.638 [2024-05-15 11:18:23.635529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.635657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.635686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 00:26:26.638 [2024-05-15 11:18:23.635871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.636055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.636084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 00:26:26.638 [2024-05-15 11:18:23.636220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.636394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.638 [2024-05-15 11:18:23.636423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.638 qpair failed and we were unable to recover it. 
00:26:26.638 [2024-05-15 11:18:23.636602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.638 [2024-05-15 11:18:23.636785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.638 [2024-05-15 11:18:23.636814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.638 qpair failed and we were unable to recover it.
00:26:26.640 [2024-05-15 11:18:23.659027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.640 [2024-05-15 11:18:23.659188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.640 [2024-05-15 11:18:23.659226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.640 qpair failed and we were unable to recover it.
00:26:26.641 [2024-05-15 11:18:23.662665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.641 [2024-05-15 11:18:23.662877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.641 [2024-05-15 11:18:23.662912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.641 qpair failed and we were unable to recover it.
00:26:26.641 [2024-05-15 11:18:23.666760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.666847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.666860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.667040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.667230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.667511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 
00:26:26.641 [2024-05-15 11:18:23.667810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.667911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.668010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.668294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.668637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 
00:26:26.641 [2024-05-15 11:18:23.668829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.668995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.669150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.669318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.669333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.641 qpair failed and we were unable to recover it. 00:26:26.641 [2024-05-15 11:18:23.669509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.641 [2024-05-15 11:18:23.669646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.669659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.669734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.669873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.669886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.669984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.670133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.670146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.670249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.670491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.670519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.670711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.670922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.670953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.671168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.671265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.671278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.671363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.671453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.671466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.671707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.671855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.671869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.671956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.672271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.672461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.672797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.672960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.673055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.673322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.673563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.673818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.673917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.674061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.674343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.674502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.674688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.674792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.674945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.675144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.675311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.675637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.675854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.675946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.676117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 
00:26:26.642 [2024-05-15 11:18:23.676440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.676678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.676829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.642 qpair failed and we were unable to recover it. 00:26:26.642 [2024-05-15 11:18:23.676971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.642 [2024-05-15 11:18:23.677185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.677216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.677359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.677527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.677571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 
00:26:26.643 [2024-05-15 11:18:23.677657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.677862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.677876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.677954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.678114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.678144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.678343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.678583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.678612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.678840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 
00:26:26.643 [2024-05-15 11:18:23.679196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.679424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.679755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.679952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.680202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.680381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.680394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 
00:26:26.643 [2024-05-15 11:18:23.680480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.680586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.680599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.680806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.680901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.680916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.681086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.681210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.681240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.681373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.681574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.681603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 
00:26:26.643 [2024-05-15 11:18:23.681848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.682057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.682086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.682276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.682489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.682518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.682701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.682816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.682844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.683055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.683178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.683208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 
00:26:26.643 [2024-05-15 11:18:23.683337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.683521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.683550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.683744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.683943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.683971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.684233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.684481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.684494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 00:26:26.643 [2024-05-15 11:18:23.684580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.684726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.684741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.643 qpair failed and we were unable to recover it. 
00:26:26.643 [2024-05-15 11:18:23.684840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.643 [2024-05-15 11:18:23.685004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.685198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.685476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.685720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.685983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.686146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.686397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.686684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.686891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.686997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.687159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.687271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.687285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.687462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.687546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.687562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.687734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.687819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.687832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.687937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.688269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.688520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.688725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.688848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.688988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.689299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.689525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.689809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.689992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.690127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.690417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.690758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.690978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.691062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.691388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.691600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.691852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.691937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.692091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.692324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.692338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 00:26:26.644 [2024-05-15 11:18:23.692486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.692552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.644 [2024-05-15 11:18:23.692565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.644 qpair failed and we were unable to recover it. 
00:26:26.644 [2024-05-15 11:18:23.692693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.692844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.692857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.693090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.693200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.693213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.693356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.693528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.693558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.693821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.694089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.694118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.694303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.694507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.694535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.694710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.694971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.695143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.695509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.695761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.695998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.696124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.696242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.696272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.696450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.696735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.696765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.696951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.697070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.697100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.697286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.697520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.697534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.697693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.697861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.697894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.698141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.698358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.698388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.698523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.698726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.698739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.698833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.698983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.698996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.699152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.699330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.699343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.699495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.699584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.699598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.699685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.699832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.699845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.699994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.700119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.700149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.700390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.700593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.700622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.700811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.700997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.701026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.701207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.701379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.701408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.701580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.701719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.701747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.701990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.702162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.702179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.702340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.702414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.702427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 00:26:26.645 [2024-05-15 11:18:23.702596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.702745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.645 [2024-05-15 11:18:23.702758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.645 qpair failed and we were unable to recover it. 
00:26:26.645 [2024-05-15 11:18:23.702903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.702995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.703086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.703330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.703485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 
00:26:26.646 [2024-05-15 11:18:23.703821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.703921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.704014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.704272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.704457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 
00:26:26.646 [2024-05-15 11:18:23.704666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.704841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.704960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.705055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.705142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.705155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 00:26:26.646 [2024-05-15 11:18:23.705234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.705312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.646 [2024-05-15 11:18:23.705325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.646 qpair failed and we were unable to recover it. 
00:26:26.646 [2024-05-15 11:18:23.705399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.705552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.705564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.705639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.705716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.705729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.705892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.705978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.705991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.706153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.706250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.706275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.706368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.706479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.706492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.706632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.706749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.706777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.706908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.707241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.707560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.707830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.707930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.708107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.708313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.708327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.708477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.708557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.708570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.708744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.708915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.708932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.709079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.709175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.709190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.709268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.709357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.709371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.646 [2024-05-15 11:18:23.709522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.709604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.646 [2024-05-15 11:18:23.709618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.646 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.709782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.709931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.709945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.710088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.710227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.710242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.710318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.710466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.710479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.710640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.710798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.710811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.710951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.711041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.711054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.711206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.711297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.711310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.711502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.711681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.711711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.711962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.712310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.712489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.712796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.712988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.713075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.713259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.713272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.713367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.713447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.713460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.713612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.713819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.713832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.714044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.714141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.714154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.714258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.714416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.714429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.714581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.714746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.714780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.714978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.715097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.715126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.715384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.715501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.715529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.715740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.715995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.716023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.716147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.716422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.716452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.716633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.716863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.716891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.717177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.717366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.717394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.717514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.717616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.717628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.717770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.717929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.717942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.718148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.718249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.718263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.718349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.718447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.718460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.718613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.718759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.718772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.647 qpair failed and we were unable to recover it.
00:26:26.647 [2024-05-15 11:18:23.718924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.647 [2024-05-15 11:18:23.719098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.719111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.719204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.719382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.719411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.719544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.719675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.719703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.719830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.720012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.720040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.720173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.720412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.720441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.720578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.720720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.720733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.720927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.721146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.721159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.721251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.721348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.721360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.721595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.721684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.721697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.721925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.722097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.722126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.722266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.722410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.722439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.722628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.722796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.722810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.722970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.723099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.723112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.723202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.723372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.723385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.723554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.723716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.723729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.723988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.724106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.724135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.724304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.724571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.724585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.724734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.724886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.724898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.724997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.725313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.725578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.725817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.725931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.648 qpair failed and we were unable to recover it.
00:26:26.648 [2024-05-15 11:18:23.726077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.648 [2024-05-15 11:18:23.726153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.726237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.726447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.726655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.726820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.726917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.727011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.727264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.727522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.727826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.727922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.728016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.728263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.728559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.728819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.728978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.729065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.729154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.729170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.729276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.729416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.729429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.729583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.729689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.729702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.729870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.730201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.730546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.730837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.730999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.731157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.731320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.731335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.731585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.731780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.649 [2024-05-15 11:18:23.731809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.649 qpair failed and we were unable to recover it.
00:26:26.649 [2024-05-15 11:18:23.731984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.732267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.732297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.649 qpair failed and we were unable to recover it. 00:26:26.649 [2024-05-15 11:18:23.732417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.732576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.732590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.649 qpair failed and we were unable to recover it. 00:26:26.649 [2024-05-15 11:18:23.732678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.732764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.732776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.649 qpair failed and we were unable to recover it. 00:26:26.649 [2024-05-15 11:18:23.732915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.733076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.733089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.649 qpair failed and we were unable to recover it. 
00:26:26.649 [2024-05-15 11:18:23.733262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.733375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.733403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.649 qpair failed and we were unable to recover it. 00:26:26.649 [2024-05-15 11:18:23.733596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.733701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.733730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.649 qpair failed and we were unable to recover it. 00:26:26.649 [2024-05-15 11:18:23.733901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.734044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.649 [2024-05-15 11:18:23.734057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.734220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.734453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.734466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.734618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.734794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.734830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.734953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.735068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.735095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.735303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.735473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.735502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.735708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.735893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.735920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.736062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.736184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.736212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.736454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.736721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.736764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.736866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.737262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.737656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.737864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.737974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.738067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.738282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.738604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.738859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.738944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.739106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.739300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.739500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.739737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.739832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.740041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.740350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.740581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.740837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.740930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.741074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.741147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.741160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.741394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.741468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.741482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 
00:26:26.650 [2024-05-15 11:18:23.741621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.741771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.741785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.741947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.742100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.742113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.742252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.742338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.650 [2024-05-15 11:18:23.742350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.650 qpair failed and we were unable to recover it. 00:26:26.650 [2024-05-15 11:18:23.742426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.742531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.742545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651 [2024-05-15 11:18:23.742636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.742774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.742790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.742877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.743128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.743362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651 [2024-05-15 11:18:23.743664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.743744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.743891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.744192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.744361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651 [2024-05-15 11:18:23.744595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.744771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.744998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.745168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.745380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651 [2024-05-15 11:18:23.745684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.745855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.745940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.746101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.746281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651 [2024-05-15 11:18:23.746441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.746815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.746898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.747053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.747380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651 [2024-05-15 11:18:23.747576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.747782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.747934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.748076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.748163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.748181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 00:26:26.651 [2024-05-15 11:18:23.748345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.748440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.651 [2024-05-15 11:18:23.748453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.651 qpair failed and we were unable to recover it. 
00:26:26.651-00:26:26.655 [2024-05-15 11:18:23.748596 through 11:18:23.770584] posix.c:1037:posix_sock_create / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [~84 identical retry records elided]
00:26:26.655 [2024-05-15 11:18:23.770690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.770776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.770789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.770878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.771107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.771464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 
00:26:26.655 [2024-05-15 11:18:23.771762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.771875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.771959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.772110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.772123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.772277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.772417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.772430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.772569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.772727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.772740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 
00:26:26.655 [2024-05-15 11:18:23.772842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.773247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.773446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.773717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.773887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 
00:26:26.655 [2024-05-15 11:18:23.773985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.774320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.774619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.774891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.774994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 
00:26:26.655 [2024-05-15 11:18:23.775158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.775321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.775335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.775518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.775722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.775735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.775820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.775998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.776013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.776214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.776396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.776410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 
00:26:26.655 [2024-05-15 11:18:23.776648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.776792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.776805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.777022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.777254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.777579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 
00:26:26.655 [2024-05-15 11:18:23.777827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.777928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.655 qpair failed and we were unable to recover it. 00:26:26.655 [2024-05-15 11:18:23.778011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.655 [2024-05-15 11:18:23.778103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.778119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.778284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.778484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.778497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.778590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.778744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.778759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.778850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.779146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.779422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.779653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.779811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.779974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.780320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.780589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.780832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.780990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.781221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.781490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.781728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.781907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.782146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.782250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.782264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.782493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.782581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.782594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.782683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.782884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.782899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.783063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.783160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.783177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.783273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.783419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.783432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.783595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.783772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.783787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.783961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.784251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.784550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.784757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.784847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.785039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.785284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.785527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 
00:26:26.656 [2024-05-15 11:18:23.785732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.656 [2024-05-15 11:18:23.785918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.656 qpair failed and we were unable to recover it. 00:26:26.656 [2024-05-15 11:18:23.786069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 00:26:26.657 [2024-05-15 11:18:23.786250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 00:26:26.657 [2024-05-15 11:18:23.786511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 
00:26:26.657 [2024-05-15 11:18:23.786692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.786782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 00:26:26.657 [2024-05-15 11:18:23.787010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.787080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.787093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 00:26:26.657 [2024-05-15 11:18:23.787190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.787280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.787293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 00:26:26.657 [2024-05-15 11:18:23.787503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.787577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.657 [2024-05-15 11:18:23.787591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.657 qpair failed and we were unable to recover it. 
00:26:26.660 [2024-05-15 11:18:23.811156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.811347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.811362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.811537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.811705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.811717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.811818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.811899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.811913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.812014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 
00:26:26.660 [2024-05-15 11:18:23.812212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.812410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.812638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.812884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.812983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 
00:26:26.660 [2024-05-15 11:18:23.813075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.813155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.813177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.813388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.813602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.813615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.813707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.813874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.813887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.814031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.814238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.814253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 
00:26:26.660 [2024-05-15 11:18:23.814348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.814434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.814448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.660 [2024-05-15 11:18:23.814546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.814702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.660 [2024-05-15 11:18:23.814718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.660 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.814824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.814997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.815010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.815152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.815312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.815329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.815540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.815761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.815776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.815936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.816024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.816038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.816176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.816337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.816350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.816558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.816710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.816724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.816811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.817120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.817389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.817644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.817729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.817912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.818154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.818383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.818714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.818911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.818990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.819106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.819324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.819586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.819671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.819776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.820113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.820312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.820588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.820764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.820866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.820939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.821204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.821447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 
00:26:26.661 [2024-05-15 11:18:23.821747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.821901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.661 qpair failed and we were unable to recover it. 00:26:26.661 [2024-05-15 11:18:23.821986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.822067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.661 [2024-05-15 11:18:23.822080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.822183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.822342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.822355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.822443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.822575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.822588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 
00:26:26.662 [2024-05-15 11:18:23.822683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.822776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.822789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.822931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.823178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.823417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 
00:26:26.662 [2024-05-15 11:18:23.823738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.823920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.824014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.824183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.824196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.824299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.824371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.824384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.824616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.824756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.824769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 
00:26:26.662 [2024-05-15 11:18:23.824866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.825258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.825545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.825777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.825930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 
00:26:26.662 [2024-05-15 11:18:23.826089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.826365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.826560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.826822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.826941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 
00:26:26.662 [2024-05-15 11:18:23.827033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.827185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.827199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.827458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.827532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.827544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.827717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.827868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.827881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 00:26:26.662 [2024-05-15 11:18:23.828022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.828107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.662 [2024-05-15 11:18:23.828120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.662 qpair failed and we were unable to recover it. 
00:26:26.662 [2024-05-15 11:18:23.828281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.828367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.828381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.662 qpair failed and we were unable to recover it.
00:26:26.662 [2024-05-15 11:18:23.828491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.828688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.828701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.662 qpair failed and we were unable to recover it.
00:26:26.662 [2024-05-15 11:18:23.828802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.828918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.828927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.662 qpair failed and we were unable to recover it.
00:26:26.662 [2024-05-15 11:18:23.829066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.829143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.662 [2024-05-15 11:18:23.829152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.662 qpair failed and we were unable to recover it.
00:26:26.665 [2024-05-15 11:18:23.847776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.847919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.847928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.665 qpair failed and we were unable to recover it. 00:26:26.665 [2024-05-15 11:18:23.848074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.848151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.848163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.665 qpair failed and we were unable to recover it. 00:26:26.665 [2024-05-15 11:18:23.848243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.848330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.848341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.665 qpair failed and we were unable to recover it. 00:26:26.665 [2024-05-15 11:18:23.848418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.665 [2024-05-15 11:18:23.848504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.848513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.848632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.848770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.848780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.848874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.848948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.848957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.849039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.849204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.849424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.849569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.849779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.849915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.850001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.850173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.850464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.850618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.850899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.850992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.851075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.851325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.851510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.851775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.851870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.851942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.852119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.852291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.852581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.852751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.852918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.852986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.853144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.853452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.853789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.853880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.853959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.854187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.666 [2024-05-15 11:18:23.854396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 
00:26:26.666 [2024-05-15 11:18:23.854643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.666 [2024-05-15 11:18:23.854724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.666 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.854855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.854947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.854956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.855020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.855202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 
00:26:26.667 [2024-05-15 11:18:23.855532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.855731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.855890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.855986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.856129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.856265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.856277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 
00:26:26.667 [2024-05-15 11:18:23.856369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.856501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.856511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.856587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.856719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.856728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.856865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.857207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 
00:26:26.667 [2024-05-15 11:18:23.857431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.857649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.857815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.857915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.858048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 
00:26:26.667 [2024-05-15 11:18:23.858275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.858515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.858817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.858906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.858992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 
00:26:26.667 [2024-05-15 11:18:23.859156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.859331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.859496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 00:26:26.667 [2024-05-15 11:18:23.859707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.667 [2024-05-15 11:18:23.859780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.667 qpair failed and we were unable to recover it. 
00:26:26.667 [2024-05-15 11:18:23.859988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.667 [2024-05-15 11:18:23.860133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.667 [2024-05-15 11:18:23.860143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.860304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.860384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.860394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.860477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.860622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.860632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.860784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.860852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.860861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.861076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.861236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.861391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.861543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.861789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.861877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.862049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.862217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.862424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.862660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.862824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.862991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.863057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.863237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.863493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.863792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.863950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.864093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.864232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.864248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.864348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.864487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.864500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.864643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.864787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.864801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.864948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.865182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.865434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.865712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.865835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.865991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.866242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.866563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.866893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.866982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.668 qpair failed and we were unable to recover it.
00:26:26.668 [2024-05-15 11:18:23.867078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.668 [2024-05-15 11:18:23.867148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.867245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.867419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.867646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.867835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.867932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.868015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.868169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.868183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.868339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.868476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.868489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.868559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.868770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.868785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.868933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.869173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.869507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.869682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.869833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.869918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.870106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.870293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.870573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.870743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.870903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.871175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.871365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.871605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.871855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.871957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.872172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.872411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.872425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.872525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.872606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.872619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.872762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.872831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.872844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.872989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.873174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.873583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.873795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.873954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.874098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.874238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.874253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.874356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.874510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.874524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.669 qpair failed and we were unable to recover it.
00:26:26.669 [2024-05-15 11:18:23.874613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.874765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.669 [2024-05-15 11:18:23.874778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.874880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.874970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.874983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.875140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.875285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.875298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.875480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.875623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.875637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.875826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.875914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.875928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.876130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.876225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.876239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.876396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.876535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.876555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.876665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.876776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.876793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.876988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.877249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.877417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.877601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.877773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.877860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.878025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.878294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.878695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.878872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.878971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.879047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.879358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.879650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.670 [2024-05-15 11:18:23.879893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.670 [2024-05-15 11:18:23.879999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.670 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.880099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.880388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.880596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.880771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.880869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.880967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.881142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.881393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.881735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.953 [2024-05-15 11:18:23.881838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.953 qpair failed and we were unable to recover it.
00:26:26.953 [2024-05-15 11:18:23.881996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-05-15 11:18:23.882327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-05-15 11:18:23.882598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-05-15 11:18:23.882804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.882968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.953 qpair failed and we were unable to recover it. 
00:26:26.953 [2024-05-15 11:18:23.883132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.883237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.883251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-05-15 11:18:23.883370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.883518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.953 [2024-05-15 11:18:23.883532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.883622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.883696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.883710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.883792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.883950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.883964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.884067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.884311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.884548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.884776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.884862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.884954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.885208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.885454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.885757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.885866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.886030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.886214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.886411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.886598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.886794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.886880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.886956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.887207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.887382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.887579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.887817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.887909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.888082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.888335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.888522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.888750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.888833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.888913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.889375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 
00:26:26.954 [2024-05-15 11:18:23.889617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.889896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.889984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.890134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.890207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.890221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.954 qpair failed and we were unable to recover it. 00:26:26.954 [2024-05-15 11:18:23.890296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.954 [2024-05-15 11:18:23.890447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.890461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.890622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.890767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.890781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.890856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.890938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.890951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.891035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.891272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.891531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.891698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.891790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.891957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.892158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.892176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.892323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.892425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.892439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.892703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.892899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.892913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.893007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.893204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.893442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.893707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.893870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.893960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.894214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.894371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.894641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.894821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.894924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.895079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.895191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.895208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.895288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.895448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.895462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.895612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.895754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.895768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.895954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.896121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.896311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.896547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.896886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.896989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.897078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.897176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.897194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.897274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.897371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.897385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 
00:26:26.955 [2024-05-15 11:18:23.897464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.897538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.955 [2024-05-15 11:18:23.897552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.955 qpair failed and we were unable to recover it. 00:26:26.955 [2024-05-15 11:18:23.897698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.897766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.897778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.897854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.897995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.898177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.898339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.898578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.898761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.898936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.899049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.899329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.899519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.899700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.899878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.900035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.900183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.900367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.900600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.900757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.900905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.901097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.901407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.901595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.901829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.901990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.902084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.902279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.902517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.902760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.902848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.902931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.903105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.903384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.903683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 
00:26:26.956 [2024-05-15 11:18:23.903857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.903994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.904008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.904250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.904347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.904360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.956 qpair failed and we were unable to recover it. 00:26:26.956 [2024-05-15 11:18:23.904522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.956 [2024-05-15 11:18:23.904597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.904610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.904764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.904850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.904863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.905023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.905185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.905199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.905343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.905468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.905481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.905646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.905818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.905832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.905976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.906233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.906443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.906643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.906821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.906922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.907014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.907336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.907525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.907713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.907811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.907890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.908216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.908416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.908593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.908766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.908852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.909009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.909247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.909425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.909601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.909834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.909935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.910025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.910170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.910184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 00:26:26.957 [2024-05-15 11:18:23.910315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.910448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.957 [2024-05-15 11:18:23.910462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.957 qpair failed and we were unable to recover it. 
00:26:26.957 [2024-05-15 11:18:23.910615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.910694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.910708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.910792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.910873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.910886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.910959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.911180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 
00:26:26.958 [2024-05-15 11:18:23.911569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.911755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.911861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.911945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.912129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 
00:26:26.958 [2024-05-15 11:18:23.912430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.912697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.912802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.912954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.913128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.913142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.913305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.913387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.913403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 
00:26:26.958 [2024-05-15 11:18:23.915273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.958 qpair failed and we were unable to recover it.
00:26:26.958 [2024-05-15 11:18:23.915458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.958 qpair failed and we were unable to recover it.
00:26:26.958 [2024-05-15 11:18:23.915685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.958 qpair failed and we were unable to recover it.
00:26:26.958 [2024-05-15 11:18:23.915846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.958 [2024-05-15 11:18:23.915943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.958 qpair failed and we were unable to recover it.
00:26:26.958 [2024-05-15 11:18:23.916002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.916176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.916408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.958 qpair failed and we were unable to recover it. 00:26:26.958 [2024-05-15 11:18:23.916692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.958 [2024-05-15 11:18:23.916803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.916942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.917084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.917324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.917498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.917709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.917791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.917869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.918154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.918360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.918561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.918785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.918872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.919014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.919180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.919346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.919571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.919733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.919891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.920036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.920191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.920410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.920639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.920801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.920877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.920947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.921188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.921355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.921633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.921864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.921950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.922017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.922176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.922354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 
00:26:26.959 [2024-05-15 11:18:23.922535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.959 qpair failed and we were unable to recover it. 00:26:26.959 [2024-05-15 11:18:23.922686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.959 [2024-05-15 11:18:23.922747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.922756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.922826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.922890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.922901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.923047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.960 [2024-05-15 11:18:23.923193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.923408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.923559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.923741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.960 [2024-05-15 11:18:23.923909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.923996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.924060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.924214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.924355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.960 [2024-05-15 11:18:23.924523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.924667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.924902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.924987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.925061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.960 [2024-05-15 11:18:23.925209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.925353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.925559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.925803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.925877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.960 [2024-05-15 11:18:23.925955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.926108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.926340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.926499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.960 [2024-05-15 11:18:23.926710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.926864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.926932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.927011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.927020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.927161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.927232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.927242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 00:26:26.960 [2024-05-15 11:18:23.927306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.927402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.960 [2024-05-15 11:18:23.927412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.960 qpair failed and we were unable to recover it. 
00:26:26.961 [2024-05-15 11:18:23.932505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:26.961 [2024-05-15 11:18:23.932599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:26.961 [2024-05-15 11:18:23.932613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 
00:26:26.961 qpair failed and we were unable to recover it. 
00:26:26.963 [2024-05-15 11:18:23.944707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.944850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.944863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.963 qpair failed and we were unable to recover it. 00:26:26.963 [2024-05-15 11:18:23.945010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.945095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.945107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.963 qpair failed and we were unable to recover it. 00:26:26.963 [2024-05-15 11:18:23.945187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.945272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.945285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.963 qpair failed and we were unable to recover it. 00:26:26.963 [2024-05-15 11:18:23.945370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.963 [2024-05-15 11:18:23.945442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.945455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.945603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.945701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.945716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.945806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.945952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.945965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.946056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.946235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.946411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.946745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.946837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.946928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.947117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.947408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.947641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.947807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.947969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.948047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.948244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.948436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.948605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.948832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.948913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.949007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.949252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.949499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.949681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.949859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.949949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.950034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.950284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.950592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.950787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.950889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.950963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.951126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.951396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 
00:26:26.964 [2024-05-15 11:18:23.951593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.964 qpair failed and we were unable to recover it. 00:26:26.964 [2024-05-15 11:18:23.951777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.964 [2024-05-15 11:18:23.951857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.951871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.951964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.952206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 
00:26:26.965 [2024-05-15 11:18:23.952445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.952625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.952790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.952892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.952967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 
00:26:26.965 [2024-05-15 11:18:23.953131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.953316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.953564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.953771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.953882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 
00:26:26.965 [2024-05-15 11:18:23.953973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.954141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.954386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.954549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 
00:26:26.965 [2024-05-15 11:18:23.954721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.954913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.954999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.955080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.955248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 
00:26:26.965 [2024-05-15 11:18:23.955421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.955637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.955806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.955888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 00:26:26.965 [2024-05-15 11:18:23.955973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.956112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.965 [2024-05-15 11:18:23.956126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.965 qpair failed and we were unable to recover it. 
00:26:26.965 [2024-05-15 11:18:23.956227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.956390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.956549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.956792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.956966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.957105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.957183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.957197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.957290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.957357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.957371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.957448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.957522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.965 [2024-05-15 11:18:23.957535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.965 qpair failed and we were unable to recover it.
00:26:26.965 [2024-05-15 11:18:23.957628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.957697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.957710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.957786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.957956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.957969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.958205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.958465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.958629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.958796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.958889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.958965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.959150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.959337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.959638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.959814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.959898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.960057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.960260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.960429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.960609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.960788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.960878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.960965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.961262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.961527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.961749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.961857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.961931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.962093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.962345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.962582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.962745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.966 [2024-05-15 11:18:23.962830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.966 qpair failed and we were unable to recover it.
00:26:26.966 [2024-05-15 11:18:23.962974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.963147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.963337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.963640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.963821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.963908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.963983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.964151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.964321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.964560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.964724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.964920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.964991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.965099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.965336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.965489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.965676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.965905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.965991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.966093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.966281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.966465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.966657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.966902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.966989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.967154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.967328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.967496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.967675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.967893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.967980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.968235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.968449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.968734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.967 qpair failed and we were unable to recover it.
00:26:26.967 [2024-05-15 11:18:23.968908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.967 [2024-05-15 11:18:23.968990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.969059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.969220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.969443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.969599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.969875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.969948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.970084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.970239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.970414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.970595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.970751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.970825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.971004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.971170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.971380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.971593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.971749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.971997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.972084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.972249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.972512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.972681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.972841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.972994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.973088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.973271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.973574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.973732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.973876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.973943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.974006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.974016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.974087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.974153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.968 [2024-05-15 11:18:23.974162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.968 qpair failed and we were unable to recover it.
00:26:26.968 [2024-05-15 11:18:23.974251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.968 qpair failed and we were unable to recover it. 00:26:26.968 [2024-05-15 11:18:23.974424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.968 qpair failed and we were unable to recover it. 00:26:26.968 [2024-05-15 11:18:23.974682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.968 qpair failed and we were unable to recover it. 00:26:26.968 [2024-05-15 11:18:23.974847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.968 [2024-05-15 11:18:23.974925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.974935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.975098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.975246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.975462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.975695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.975781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.975867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.976118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.976276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.976433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.976589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.976741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.976907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.976989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.977066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.977287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.977439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.977652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.977803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.977889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.977967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.978140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.978294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.978455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.978610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.978781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.978928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.978998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.979008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.979089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.979154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.979168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 
00:26:26.969 [2024-05-15 11:18:23.979238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.979309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.979320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.969 qpair failed and we were unable to recover it. 00:26:26.969 [2024-05-15 11:18:23.979399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.969 [2024-05-15 11:18:23.979499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.979575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.979728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.979876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.979942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.980024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.980188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.980343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.980642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.980736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.980960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.981179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.981330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.981582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.981763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.981912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.981976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.982142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.982282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.982431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.982589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.982801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.982952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.983017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.983180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.983410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.983549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.983728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.983905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.983992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.984076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.984155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.984175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 00:26:26.970 [2024-05-15 11:18:23.984260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.984400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.970 [2024-05-15 11:18:23.984410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.970 qpair failed and we were unable to recover it. 
00:26:26.970 [2024-05-15 11:18:23.984476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.970 [2024-05-15 11:18:23.984559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.970 [2024-05-15 11:18:23.984568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.970 qpair failed and we were unable to recover it.
00:26:26.970 [2024-05-15 11:18:23.984639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.970 [2024-05-15 11:18:23.984713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.970 [2024-05-15 11:18:23.984723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.970 qpair failed and we were unable to recover it.
00:26:26.970 [2024-05-15 11:18:23.984796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.970 [2024-05-15 11:18:23.984925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.970 [2024-05-15 11:18:23.984935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.970 qpair failed and we were unable to recover it.
00:26:26.970 [2024-05-15 11:18:23.985010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.985159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.985302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.985517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.985728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.985833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.985971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.986129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.986289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.986561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.986699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.986874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.986947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.987028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.987240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.987479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.987634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.987786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.987930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.988176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.988339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.988496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.988716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.988879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.988954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.971 qpair failed and we were unable to recover it.
00:26:26.971 [2024-05-15 11:18:23.989026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.971 [2024-05-15 11:18:23.989099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.989198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.989357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.989591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.989812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.989895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.989970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.990184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.990352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.990507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.990649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.990874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.990960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.991032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.991220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.991370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.991516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.991686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.991834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.991926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.992081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.992268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.992430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.992603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.992751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.992894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.992986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.993055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.993197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.993410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.993624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.993795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.993869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.993949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.994017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.994027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.972 [2024-05-15 11:18:23.994104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.994189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.972 [2024-05-15 11:18:23.994198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.972 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.994272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.994421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.994598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.994757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.994906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.994989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.995063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.995211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.995415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.995633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.995775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.995867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.996094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.996267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.996478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.996636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.996783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.996926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.996995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.997152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.997294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.997515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.997670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.997836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.997917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.997989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.998133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.998367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.998523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.998746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.998831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.973 qpair failed and we were unable to recover it.
00:26:26.973 [2024-05-15 11:18:23.998894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.973 [2024-05-15 11:18:23.999040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:23.999118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:23.999345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:23.999497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:23.999731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:23.999888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:23.999956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:24.000028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:24.000037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:24.000111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:24.000192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:24.000204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:24.000285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:24.000416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.974 [2024-05-15 11:18:24.000426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:26.974 qpair failed and we were unable to recover it.
00:26:26.974 [2024-05-15 11:18:24.000569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.000633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.000643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.000709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.000774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.000784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.000860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.000929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.000939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.001003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 
00:26:26.974 [2024-05-15 11:18:24.001263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.001406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.001607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.001752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 
00:26:26.974 [2024-05-15 11:18:24.001893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.001990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.002083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.002362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.002605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 
00:26:26.974 [2024-05-15 11:18:24.002820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.002900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.002981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.003153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.003327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 
00:26:26.974 [2024-05-15 11:18:24.003541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.003689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.003848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.003923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.003995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.004062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.004072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 
00:26:26.974 [2024-05-15 11:18:24.004303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.004386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.004395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.004464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.004614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.974 [2024-05-15 11:18:24.004624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.974 qpair failed and we were unable to recover it. 00:26:26.974 [2024-05-15 11:18:24.004691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.004852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.004862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.004930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.005108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.005337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.005553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.005715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.005794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.005869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.006090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.006233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.006385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.006543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.006753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.006908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.006986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.007068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.007288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.007457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.007762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.007910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.008002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.008248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.008396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.008602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.008769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.008850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.008935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.009144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.009415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 00:26:26.975 [2024-05-15 11:18:24.009725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.975 qpair failed and we were unable to recover it. 
00:26:26.975 [2024-05-15 11:18:24.009872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:18:24.009936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.009947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.010021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.010172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.010415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 
00:26:26.976 [2024-05-15 11:18:24.010565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.010737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.010812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.010950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.011103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.011113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.011194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.011258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.011267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 
00:26:26.976 [2024-05-15 11:18:24.011336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.011436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:18:24.011446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.976 qpair failed and we were unable to recover it. 00:26:26.976 [2024-05-15 11:18:24.011512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.011574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.011584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.011744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.011807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.011818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.011889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.011957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.011966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.012033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.012182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.012352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.012571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.012713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.012856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.012938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.013076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.013235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.013529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.013669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.013816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.013901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.013976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.014138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.014337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.014476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.014615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.014842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.014918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.014995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.015136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.015286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.015473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.015694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.015846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.015926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.015995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.016060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.016071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 
00:26:26.977 [2024-05-15 11:18:24.016151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.016243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.016254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.016331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.016407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:18:24.016415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.977 qpair failed and we were unable to recover it. 00:26:26.977 [2024-05-15 11:18:24.016483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.016539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.016549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.016628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.016695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.016705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.016785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.016882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.016892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.016969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.017185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.017328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.017553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.017723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.017798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.017932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.018093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.018342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.018643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.018860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.018944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.019081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.019234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.019460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.019785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.019864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.019947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.020155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.020475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.020696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.020931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.020999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.021142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.021423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.021802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.021881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.021964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 
00:26:26.978 [2024-05-15 11:18:24.022138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.022298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.022516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.978 qpair failed and we were unable to recover it. 00:26:26.978 [2024-05-15 11:18:24.022689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.978 [2024-05-15 11:18:24.022764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.022832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.022918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.022927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.022994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.023255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.023469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.023632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.023800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.023874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.023975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.024212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.024406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.024562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.024713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.024801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.024881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.025175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.025360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.025543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.025725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.025843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.025918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.026133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.026334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.026539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.026742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.026957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.027034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.027261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.027444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.027699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.027794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.027947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.028119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.028306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 
00:26:26.979 [2024-05-15 11:18:24.028544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.979 [2024-05-15 11:18:24.028700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.979 qpair failed and we were unable to recover it. 00:26:26.979 [2024-05-15 11:18:24.028800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.028888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.028901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.028987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.029172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.029376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.029600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.029856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.029936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.030147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.030239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.030253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.030415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.030494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.030507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.030673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.030813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.030827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.030988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.031228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.031476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.031648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.031821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.031923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.032000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.032304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.032578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.032808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.032898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.033052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.033231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.033499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.033699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.033793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.033942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.034186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.034435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.034615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.034798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.034930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 
00:26:26.980 [2024-05-15 11:18:24.035077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.035172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.035187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.035277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.035367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.035381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.980 [2024-05-15 11:18:24.035470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.035564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.980 [2024-05-15 11:18:24.035577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.980 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.035648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.035788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.035801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 
00:26:26.981 [2024-05-15 11:18:24.035945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.036122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.036434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.036686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 
00:26:26.981 [2024-05-15 11:18:24.036913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.036988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.037092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.037284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.037522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 
00:26:26.981 [2024-05-15 11:18:24.037699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.037893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.037979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.038068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.038248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 
00:26:26.981 [2024-05-15 11:18:24.038492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.038659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.038851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.038991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.039103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 
00:26:26.981 [2024-05-15 11:18:24.039294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.039459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.039619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.039793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.039968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 
00:26:26.981 [2024-05-15 11:18:24.040047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.040237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.040399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.981 qpair failed and we were unable to recover it. 00:26:26.981 [2024-05-15 11:18:24.040626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.981 [2024-05-15 11:18:24.040701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.040719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 
00:26:26.982 [2024-05-15 11:18:24.040808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.040880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.040893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 00:26:26.982 [2024-05-15 11:18:24.040986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 00:26:26.982 [2024-05-15 11:18:24.041234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 00:26:26.982 [2024-05-15 11:18:24.041408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 
00:26:26.982 [2024-05-15 11:18:24.041582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 00:26:26.982 [2024-05-15 11:18:24.041894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.041991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 00:26:26.982 [2024-05-15 11:18:24.042082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.042153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.042172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 00:26:26.982 [2024-05-15 11:18:24.042258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.042342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.982 [2024-05-15 11:18:24.042355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.982 qpair failed and we were unable to recover it. 
00:26:26.982 [2024-05-15 11:18:24.042451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.042534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.042547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.042623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.042697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.042712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.042785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.042874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.042887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.042980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.043181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.043349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.043518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.043696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.043788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.043873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.044104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.044296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.044469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.044641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.044867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.044981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.045056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.045285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.045534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.045709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.045862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.045949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.046029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.046043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.046192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.046259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.046272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.046346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.046414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.982 [2024-05-15 11:18:24.046426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.982 qpair failed and we were unable to recover it.
00:26:26.982 [2024-05-15 11:18:24.046498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.046600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.046613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.046698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.046789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.046803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.046876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.047118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.047416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.047674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.047911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.047995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.048205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.048300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.048314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.048592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.048671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.048683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.048828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.048979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.048992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.049147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.049342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.049591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.049761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.049866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.049951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.050223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.050416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.050663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.050772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.051029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.051297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.051481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.051707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.051861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.051937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.052143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.052313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.052560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.052807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.052911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.053064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.053215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.053228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.053315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.053388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.053401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.983 [2024-05-15 11:18:24.053554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.053666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.983 [2024-05-15 11:18:24.053695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.983 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.053824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.054234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.054489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.054716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.054815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.054958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.055189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.055436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.055631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.055788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.055940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.056258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.056532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.056875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.056978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.057118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.057282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.057480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.057645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.057809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.058215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.058441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.058680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.058915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.059023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.059145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.059222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.059347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.059459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.984 [2024-05-15 11:18:24.059472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.984 qpair failed and we were unable to recover it.
00:26:26.984 [2024-05-15 11:18:24.059630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.059719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.059732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 00:26:26.984 [2024-05-15 11:18:24.059886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.060066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.060096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 00:26:26.984 [2024-05-15 11:18:24.060229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.060408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.060437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 00:26:26.984 [2024-05-15 11:18:24.060626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.060812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.060841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 
00:26:26.984 [2024-05-15 11:18:24.060963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.061143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.061181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 00:26:26.984 [2024-05-15 11:18:24.061359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.061442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.061455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 00:26:26.984 [2024-05-15 11:18:24.061546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.061617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.061630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 00:26:26.984 [2024-05-15 11:18:24.061795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.062006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.062018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.984 qpair failed and we were unable to recover it. 
00:26:26.984 [2024-05-15 11:18:24.062088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.062185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.984 [2024-05-15 11:18:24.062198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.062288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.062373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.062387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.062474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.062566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.062598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.062726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.062831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.062859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 
00:26:26.985 [2024-05-15 11:18:24.062972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.063182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.063211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.063327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.063412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.063425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.063589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.063771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.063800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.064056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 
00:26:26.985 [2024-05-15 11:18:24.064289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.064518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.064704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.064868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.064972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 
00:26:26.985 [2024-05-15 11:18:24.065070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.065215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.065229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.065437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.065508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.065521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.065598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.065736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.065748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.065908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 
00:26:26.985 [2024-05-15 11:18:24.066149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.066429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.066804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.066941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.067135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.067264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.067294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 
00:26:26.985 [2024-05-15 11:18:24.067481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.067564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.067577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.067736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.067880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.067893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.068050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.068311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 
00:26:26.985 [2024-05-15 11:18:24.068502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.985 [2024-05-15 11:18:24.068805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.985 [2024-05-15 11:18:24.068912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.985 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.069001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.069181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.069375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.069646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.069828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.069919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.070010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.070279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.070469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.070705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.070787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.070942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.071117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.071292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.071528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.071699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.071807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.071949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.072100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.072314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.072482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.072645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.072828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.072922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.073018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.073265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.073497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.073687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.073863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.073946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.074032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 
00:26:26.986 [2024-05-15 11:18:24.074238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.074433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.074582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.986 qpair failed and we were unable to recover it. 00:26:26.986 [2024-05-15 11:18:24.074825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.986 [2024-05-15 11:18:24.074977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.987 [2024-05-15 11:18:24.074991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.987 qpair failed and we were unable to recover it. 
00:26:26.987 [2024-05-15 11:18:24.075066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.987 [2024-05-15 11:18:24.075135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.987 [2024-05-15 11:18:24.075148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.987 qpair failed and we were unable to recover it.
[... the same cycle — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 11:18:24.075230 through 11:18:24.094149 ...]
00:26:26.989 [2024-05-15 11:18:24.094235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.989 [2024-05-15 11:18:24.094346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.989 [2024-05-15 11:18:24.094359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:26.989 qpair failed and we were unable to recover it.
00:26:26.989 [2024-05-15 11:18:24.094436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.989 [2024-05-15 11:18:24.094582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.094596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.094675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.094812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.094843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.095023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.095353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 
00:26:26.990 [2024-05-15 11:18:24.095552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.095747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.095931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.096003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.096199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 
00:26:26.990 [2024-05-15 11:18:24.096468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.096678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.096841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.096938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.097162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 
00:26:26.990 [2024-05-15 11:18:24.097397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.097541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.097768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.097975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.098064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 
00:26:26.990 [2024-05-15 11:18:24.098207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.098356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.098532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.098712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.098958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 
00:26:26.990 [2024-05-15 11:18:24.099035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.099207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.099371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.099600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 
00:26:26.990 [2024-05-15 11:18:24.099843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.099996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.990 qpair failed and we were unable to recover it. 00:26:26.990 [2024-05-15 11:18:24.100093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.990 [2024-05-15 11:18:24.100202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.100217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.100307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.100450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.100479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.100672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.100911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.100940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.101075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.101201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.101230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.101452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.101579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.101607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.101743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.101865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.101892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.102014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.102194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.102222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.102347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.102549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.102579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.102767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.102893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.102921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.103024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.103298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.103592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.103793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.103887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.104030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.104203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.104522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.104866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.104980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.105139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.105381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.105635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.105846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.105963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.106294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.106551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.106740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.106834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.106975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.107161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.107405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 
00:26:26.991 [2024-05-15 11:18:24.107666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.107822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.108044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.108130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.108143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.991 [2024-05-15 11:18:24.108250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.108334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.991 [2024-05-15 11:18:24.108347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.991 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.108431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.108583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.108596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.108682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.108854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.108867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.108955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.109137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.109320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.109529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.109850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.109937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.110076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.110338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.110579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.110823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.110989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.111079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.111253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.111445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.111695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.111793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.111951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.112148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.112506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.112690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.112865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.112951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.113029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.113196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.113425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.113620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.113850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.113958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.114054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.114247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.114539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 00:26:26.992 [2024-05-15 11:18:24.114772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.114889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.992 qpair failed and we were unable to recover it. 
00:26:26.992 [2024-05-15 11:18:24.115031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.115104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.992 [2024-05-15 11:18:24.115118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.115196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.115349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.115363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.115549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.115764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.115792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.115911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.116032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.116061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.116174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.116283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.116314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.116510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.116681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.116715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.116903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.117244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.117416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.117737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.117821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.118040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.118334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.118505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.118733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.118853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.119007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.119151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.119170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.119327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.119469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.119485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.119654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.119785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.119798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.119926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.120141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.120335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.120538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.120723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.120902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.120996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.121083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.121264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.121510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.121686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.121851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.121963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 
00:26:26.993 [2024-05-15 11:18:24.122034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.122179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.122193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.122276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.122352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.122364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.993 qpair failed and we were unable to recover it. 00:26:26.993 [2024-05-15 11:18:24.122447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.993 [2024-05-15 11:18:24.122537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.122551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.122694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.122771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.122784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 
00:26:26.994 [2024-05-15 11:18:24.122875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.122963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.122976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.123049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.123138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.123151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.123301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.123477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.123489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.123654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.123757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.123786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 
00:26:26.994 [2024-05-15 11:18:24.123928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.124328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.124590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.124870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.124990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.125018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 
00:26:26.994 [2024-05-15 11:18:24.125154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.125301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.125331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.125507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.125708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.125736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.125986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.126098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.126127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.126324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.126539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.126553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 
00:26:26.994 [2024-05-15 11:18:24.126639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.126784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.126797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.126951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.127042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.127055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.127153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.127246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.127259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 00:26:26.994 [2024-05-15 11:18:24.127399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.127541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.994 [2024-05-15 11:18:24.127553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:26.994 qpair failed and we were unable to recover it. 
00:26:26.996 [2024-05-15 11:18:24.139794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.996 [2024-05-15 11:18:24.139905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.996 [2024-05-15 11:18:24.139923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:26.996 qpair failed and we were unable to recover it.
00:26:26.997 [2024-05-15 11:18:24.148043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.148350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.148627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.148842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.148926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 
00:26:26.997 [2024-05-15 11:18:24.149069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.149148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.149161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.149347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.149524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.149554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.149663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.149855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.149883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.150014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 
00:26:26.997 [2024-05-15 11:18:24.150291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.150535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.150718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.150877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.150964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 
00:26:26.997 [2024-05-15 11:18:24.151147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.151343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.151520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.151692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 
00:26:26.997 [2024-05-15 11:18:24.151861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.151958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.152054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.152124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.152137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.152237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.152319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.152332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.997 qpair failed and we were unable to recover it. 00:26:26.997 [2024-05-15 11:18:24.152408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.152471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.997 [2024-05-15 11:18:24.152484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 
00:26:26.998 [2024-05-15 11:18:24.152563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.152639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.152652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.152735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.152821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.152834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.152916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.153102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 
00:26:26.998 [2024-05-15 11:18:24.153306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.153482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.153723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.153816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.153892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 
00:26:26.998 [2024-05-15 11:18:24.154216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.154483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.154690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.154884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.155007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 
00:26:26.998 [2024-05-15 11:18:24.155290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.155580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.155786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.155944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.156091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.156163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.156190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 
00:26:26.998 [2024-05-15 11:18:24.156299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.156466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.156494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.156693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.156808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.156835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.156955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.157064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.157091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 00:26:26.998 [2024-05-15 11:18:24.157223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.157351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.157379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.998 qpair failed and we were unable to recover it. 
00:26:26.998 [2024-05-15 11:18:24.157572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.157684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.998 [2024-05-15 11:18:24.157713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.157905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.158160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.158562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 
00:26:26.999 [2024-05-15 11:18:24.158835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.158922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.159006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.159264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.159499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 
00:26:26.999 [2024-05-15 11:18:24.159663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.159822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.159980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.160136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.160175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.160287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.160425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.160453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.160632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.160752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.160766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 
00:26:26.999 [2024-05-15 11:18:24.160928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.161070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.161084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.161240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.161433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.161461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.161590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.161760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.161790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 00:26:26.999 [2024-05-15 11:18:24.161979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.162072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.999 [2024-05-15 11:18:24.162100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:26.999 qpair failed and we were unable to recover it. 
00:26:26.999 [2024-05-15 11:18:24.162253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.162386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.162399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.999 qpair failed and we were unable to recover it.
00:26:26.999 [2024-05-15 11:18:24.162479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.162562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.162589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.999 qpair failed and we were unable to recover it.
00:26:26.999 [2024-05-15 11:18:24.162810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.162940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.162969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.999 qpair failed and we were unable to recover it.
00:26:26.999 [2024-05-15 11:18:24.163174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.163283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.163311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.999 qpair failed and we were unable to recover it.
00:26:26.999 [2024-05-15 11:18:24.163424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.163548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.163578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.999 qpair failed and we were unable to recover it.
00:26:26.999 [2024-05-15 11:18:24.163680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.163832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.999 [2024-05-15 11:18:24.163846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:26.999 qpair failed and we were unable to recover it.
00:26:26.999 [2024-05-15 11:18:24.163924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.164162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.164419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.164674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.164900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.164989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.165079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.165270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.165451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.165694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.165790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.165878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.166134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.166326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.166612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.166801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.166978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.167128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.167436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.167727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.167902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.167996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.168173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.168351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.168534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.168763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.168921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.169010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.169254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.169444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.169688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.169839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.169928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.170179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.170379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.000 [2024-05-15 11:18:24.170565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.000 [2024-05-15 11:18:24.170651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.000 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.170749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.170899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.170913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.170985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.171217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.171522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.171855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.171990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.172932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.173156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.173568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.173750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.173854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.173955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.174202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.174402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.174657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.174851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.174962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.175043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.175240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.175417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.175614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.175791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.175893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.175980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.176205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.176466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.176651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.176802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.176960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.177193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.177461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.177633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.177729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.177868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.178033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.178046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.001 qpair failed and we were unable to recover it.
00:26:27.001 [2024-05-15 11:18:24.178120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.178220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.001 [2024-05-15 11:18:24.178239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.178313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.178393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.178406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.178550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.178631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.178644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.178742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.178817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.178831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.178920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.179099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.179337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.179588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.179902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.179991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.180112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.180301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.180582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.180756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.180850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.180939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.181183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.181437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.181729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.181894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.181984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.182083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.182273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.182443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.182681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.182864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.182976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.183070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.183149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.002 [2024-05-15 11:18:24.183162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.002 qpair failed and we were unable to recover it.
00:26:27.002 [2024-05-15 11:18:24.183243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.002 qpair failed and we were unable to recover it. 00:26:27.002 [2024-05-15 11:18:24.183430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.002 qpair failed and we were unable to recover it. 00:26:27.002 [2024-05-15 11:18:24.183590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.002 qpair failed and we were unable to recover it. 00:26:27.002 [2024-05-15 11:18:24.183780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.183961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.002 qpair failed and we were unable to recover it. 
00:26:27.002 [2024-05-15 11:18:24.184048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.184148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.002 [2024-05-15 11:18:24.184161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.002 qpair failed and we were unable to recover it. 00:26:27.002 [2024-05-15 11:18:24.184278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.184359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.184372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.184615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.184714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.184728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.184817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.184903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.184917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 
00:26:27.003 [2024-05-15 11:18:24.185061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.185245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.185560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.185805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.185900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 
00:26:27.003 [2024-05-15 11:18:24.185974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.186160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.186424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.186749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.186920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 
00:26:27.003 [2024-05-15 11:18:24.187064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.187321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.187496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.187672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.187758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 
00:26:27.003 [2024-05-15 11:18:24.187967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.188134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.188364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.188593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.188781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 
00:26:27.003 [2024-05-15 11:18:24.188936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.189179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.189441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 00:26:27.003 [2024-05-15 11:18:24.189691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.189874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.003 qpair failed and we were unable to recover it. 
00:26:27.003 [2024-05-15 11:18:24.189965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.003 [2024-05-15 11:18:24.190039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.190126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.190334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.190534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 
00:26:27.004 [2024-05-15 11:18:24.190762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.190862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.191041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.191260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.191455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 
00:26:27.004 [2024-05-15 11:18:24.191794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.191920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.192002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.192086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.192099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.004 [2024-05-15 11:18:24.192184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.192282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.004 [2024-05-15 11:18:24.192295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.004 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.192507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.192655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.192669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 
00:26:27.292 [2024-05-15 11:18:24.192757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.192834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.192848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.192923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.193172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.193340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 
00:26:27.292 [2024-05-15 11:18:24.193498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.193736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.193899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.193973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.194160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 
00:26:27.292 [2024-05-15 11:18:24.194377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.194544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.194824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.194915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.194991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 
00:26:27.292 [2024-05-15 11:18:24.195252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.195437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.195663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 00:26:27.292 [2024-05-15 11:18:24.195826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.292 [2024-05-15 11:18:24.195984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.292 qpair failed and we were unable to recover it. 
00:26:27.292 [2024-05-15 11:18:24.196077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.196246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.196488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.196660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.196824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.196923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.197015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.197203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.197376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.197551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.197730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.292 [2024-05-15 11:18:24.197912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.197993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.292 [2024-05-15 11:18:24.198006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.292 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.198157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.198363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.198622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.198781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.198934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.199086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.199258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.199496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.199659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.199747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.199933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.200199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.200376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.200538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.200777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.200933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.201017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.201219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.201465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.201677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.201783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.201936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.202248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.202515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.202746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.202841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.202935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.203161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.203439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.203659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.203849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.203997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.204103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.204363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.204619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.293 [2024-05-15 11:18:24.204706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.293 qpair failed and we were unable to recover it.
00:26:27.293 [2024-05-15 11:18:24.204799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.204886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.204899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.205000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.205336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.205519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.205685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.205838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.205983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.206150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.206356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.206667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.206843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.206987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.207089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.207283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.207527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.207754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.207913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.207999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.208073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.208252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.208442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.208699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.208876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.208971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.209053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.209227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.209549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.209741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.209836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.209912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.210154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.210340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.210523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.210686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.294 qpair failed and we were unable to recover it.
00:26:27.294 [2024-05-15 11:18:24.210855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.210992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.294 [2024-05-15 11:18:24.211006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.211077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.211329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.211567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.211732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.211893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.211990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.212076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.212280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.212512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.212760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.212913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.212989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.213169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.213348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.213576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.213813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.213914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.213989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.214277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.214533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.214712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.295 [2024-05-15 11:18:24.214803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.295 qpair failed and we were unable to recover it.
00:26:27.295 [2024-05-15 11:18:24.214881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.214957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.214971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.215047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.215230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.215436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 
00:26:27.295 [2024-05-15 11:18:24.215690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.215781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.215934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.216200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.216372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 
00:26:27.295 [2024-05-15 11:18:24.216549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.216787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.216972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.217041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.217110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.217123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.295 qpair failed and we were unable to recover it. 00:26:27.295 [2024-05-15 11:18:24.217200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.295 [2024-05-15 11:18:24.217279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 
00:26:27.296 [2024-05-15 11:18:24.217444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.217693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.217881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.217967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.218108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 
00:26:27.296 [2024-05-15 11:18:24.218359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.218549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.218729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.218813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.218958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 
00:26:27.296 [2024-05-15 11:18:24.219209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.219487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.219670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.219913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.220057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.220181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.220211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 
00:26:27.296 [2024-05-15 11:18:24.220326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.220440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.220468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.220653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.220841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.220870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.220979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.221153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.221193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.221377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.221623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.221651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 
00:26:27.296 [2024-05-15 11:18:24.221834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.222152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.222450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.222784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.222986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 
00:26:27.296 [2024-05-15 11:18:24.223179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.223298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.223327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.223531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.223639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.223667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.296 qpair failed and we were unable to recover it. 00:26:27.296 [2024-05-15 11:18:24.223855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.224038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.296 [2024-05-15 11:18:24.224067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.224220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.224345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.224374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.297 [2024-05-15 11:18:24.224553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.224723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.224752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.224932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.225210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.225496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.297 [2024-05-15 11:18:24.225785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.225988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.226090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.226382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.226650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.226791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.297 [2024-05-15 11:18:24.226906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.227237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.227625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.227903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.227996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.297 [2024-05-15 11:18:24.228076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.228322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.228628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.228811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.228911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.297 [2024-05-15 11:18:24.229002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.229240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.229472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.229730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.229915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.297 [2024-05-15 11:18:24.229989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.230303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.230548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 00:26:27.297 [2024-05-15 11:18:24.230760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.297 [2024-05-15 11:18:24.230921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.297 qpair failed and we were unable to recover it. 
00:26:27.298 [2024-05-15 11:18:24.239608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.239689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.239702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.298 qpair failed and we were unable to recover it.
00:26:27.298 [2024-05-15 11:18:24.239794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.239864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.239877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.298 qpair failed and we were unable to recover it.
00:26:27.298 [2024-05-15 11:18:24.240019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.240127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.240144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.298 qpair failed and we were unable to recover it.
00:26:27.298 [2024-05-15 11:18:24.240303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.240385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.298 [2024-05-15 11:18:24.240400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.298 qpair failed and we were unable to recover it.
00:26:27.298 [2024-05-15 11:18:24.240504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.298 [2024-05-15 11:18:24.240671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.298 [2024-05-15 11:18:24.240685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.298 qpair failed and we were unable to recover it. 00:26:27.298 [2024-05-15 11:18:24.240770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.298 [2024-05-15 11:18:24.240906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.240920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.241105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.241273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.241609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.241782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.241882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.241978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.242291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.242543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.242807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.242911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.243004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.243259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.243501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.243680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.243858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.243959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.244050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.244311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.244499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.244774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.244867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.245019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.245203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.245437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.245625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.245787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.245929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.246186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.246364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.246603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.246862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.246981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 
00:26:27.299 [2024-05-15 11:18:24.247058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.247250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.247501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.299 qpair failed and we were unable to recover it. 00:26:27.299 [2024-05-15 11:18:24.247671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.299 [2024-05-15 11:18:24.247756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.247769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 
00:26:27.300 [2024-05-15 11:18:24.247922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.248098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.248465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.248711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.248866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 
00:26:27.300 [2024-05-15 11:18:24.248944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.249127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.249315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.249477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 
00:26:27.300 [2024-05-15 11:18:24.249736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.249906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.249987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.250001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.250088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.250174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.250188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 00:26:27.300 [2024-05-15 11:18:24.250330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.250464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.300 [2024-05-15 11:18:24.250477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.300 qpair failed and we were unable to recover it. 
00:26:27.300 [2024-05-15 11:18:24.250560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.250631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.250644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.250727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.250814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.250827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.250915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.251182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.251357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.251635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.251825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.252004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.252253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.252448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.252628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.252795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.252890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.252973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.253047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.253060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.253139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.253226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.253240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.253318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.253420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.300 [2024-05-15 11:18:24.253432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.300 qpair failed and we were unable to recover it.
00:26:27.300 [2024-05-15 11:18:24.253512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.253595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.253609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.253690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.253759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.253775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.253855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.253933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.253945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.254021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.254254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.254435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.254625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.254872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.254967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.255065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.255247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.255416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.255693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.255863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.255948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.256028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.256272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.256477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.256685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.256778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.256918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.257098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.257389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.257626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.257798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.257886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.257965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.258175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.258509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.258760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.258844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.258927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.259011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.259024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.259109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.259187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.259200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.301 qpair failed and we were unable to recover it.
00:26:27.301 [2024-05-15 11:18:24.259277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.259388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.301 [2024-05-15 11:18:24.259401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.259476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.259618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.259631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.259796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.259904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.259918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.259996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.260261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.260449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.260622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.260813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.260906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.260996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.261184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.261420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.261599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.261779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.261864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.261953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.262114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.262291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.262529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.262697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.262852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.262942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.263200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.263383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.263676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.263855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.263966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.264061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.264225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.264455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.264795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.264914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.265060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.265134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.265147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.265233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.265378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.265391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.265476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.265562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.302 [2024-05-15 11:18:24.265575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.302 qpair failed and we were unable to recover it.
00:26:27.302 [2024-05-15 11:18:24.265731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.265808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.265822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.265907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.266145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.266390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.266570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.266742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.266913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.266988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.267084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.267249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.267431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.267688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.267884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.267970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.268054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.268234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.268405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.268656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.268878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.268957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.269129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.269379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.269563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.269728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.269899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.269990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.270069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.270296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.270536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.270725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.270834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.270923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.271125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.271406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.303 [2024-05-15 11:18:24.271598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.303 [2024-05-15 11:18:24.271700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.303 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.271790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.271886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.271899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.272044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.272290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.272533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.272707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.272868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.273060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.273327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.273501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.273764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.273860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.273939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.274208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.274383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.274566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.274746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.274833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.275019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.275228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.275394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.275588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.275794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.275883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.276037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.276228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.276415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.276657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.276814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.276900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.277014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.277210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.277381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.277612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.277714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.277944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.278015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.278028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.304 qpair failed and we were unable to recover it.
00:26:27.304 [2024-05-15 11:18:24.278116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.278210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.304 [2024-05-15 11:18:24.278224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.278302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.278393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.278406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.278479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.278561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.278574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.278721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.278858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.278876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.278948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.279279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.279533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.279773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.279993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.280086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.280317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.280479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.280778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.280877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.280964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.281055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.305 [2024-05-15 11:18:24.281068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.305 qpair failed and we were unable to recover it.
00:26:27.305 [2024-05-15 11:18:24.281146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.281396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.281603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.281856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.281961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 
00:26:27.305 [2024-05-15 11:18:24.282059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.282294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.282470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.282640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 
00:26:27.305 [2024-05-15 11:18:24.282803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.282907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.282987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.283242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 00:26:27.305 [2024-05-15 11:18:24.283480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.305 qpair failed and we were unable to recover it. 
00:26:27.305 [2024-05-15 11:18:24.283659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.305 [2024-05-15 11:18:24.283745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.283758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.283827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.283907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.283920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.283998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.284226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.284426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.284627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.284859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.284943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.285019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.285449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.285702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.285865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.285962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.286031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.286256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.286416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.286641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.286804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.286887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.287028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.287213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.287377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.287547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.287731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.287908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.287994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.288137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.288395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.288587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.288892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.288998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.289077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.289250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 
00:26:27.306 [2024-05-15 11:18:24.289548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.289699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.306 qpair failed and we were unable to recover it. 00:26:27.306 [2024-05-15 11:18:24.289905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.306 [2024-05-15 11:18:24.289981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.289994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.290089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
00:26:27.307 [2024-05-15 11:18:24.290314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.290691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.290889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.290992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.291088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
00:26:27.307 [2024-05-15 11:18:24.291285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.291459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.291722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.291818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.291974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
00:26:27.307 [2024-05-15 11:18:24.292173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.292351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.292605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.292812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.292906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
00:26:27.307 [2024-05-15 11:18:24.292987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.293202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.293216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.293354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.293500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.293514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.293725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.293862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.293876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.294018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.294168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.294182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
00:26:27.307 [2024-05-15 11:18:24.294343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.294422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.294435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.294525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.294678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.294691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.294846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.295201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
00:26:27.307 [2024-05-15 11:18:24.295432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.295631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.295797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.295962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 00:26:27.307 [2024-05-15 11:18:24.296100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.296182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.307 [2024-05-15 11:18:24.296196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:27.307 qpair failed and we were unable to recover it. 
[... identical failure cycles repeat from 2024-05-15 11:18:24.296272 through 11:18:24.312268: two posix_sock_create connect() failures (errno = 111), an nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fa688000b90 (later tqpair=0x7fa690000b90) with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:26:27.310 [2024-05-15 11:18:24.312337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.312420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.312429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.310 qpair failed and we were unable to recover it. 00:26:27.310 [2024-05-15 11:18:24.312515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.312579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.312588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.310 qpair failed and we were unable to recover it. 00:26:27.310 [2024-05-15 11:18:24.312669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.312804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.312814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.310 qpair failed and we were unable to recover it. 00:26:27.310 [2024-05-15 11:18:24.312979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.313110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.310 [2024-05-15 11:18:24.313119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.310 qpair failed and we were unable to recover it. 
00:26:27.310 [2024-05-15 11:18:24.313202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.313381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.313552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.313719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.313873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.313956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.314139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.314366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.314519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.314807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.314888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.314961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.315118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.315457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.315598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.315739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.315889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.316026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.316179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.316480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.316772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.316850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.316940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.317109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.317251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.317404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.317639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.317866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.317956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.318022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.318260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.318424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.318675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 
00:26:27.311 [2024-05-15 11:18:24.318911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.318993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.311 [2024-05-15 11:18:24.319093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.319192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.311 [2024-05-15 11:18:24.319203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.311 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.319358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.319432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.319441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.319586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.319655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.319664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.312 [2024-05-15 11:18:24.319752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.319833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.319843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.319914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.320150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.320321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.312 [2024-05-15 11:18:24.320464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.320742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.320914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.321067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.321294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.312 [2024-05-15 11:18:24.321573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.321754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.321897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.321987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.322153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.312 [2024-05-15 11:18:24.322414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.322658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.322826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.322923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.323078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.312 [2024-05-15 11:18:24.323357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.323583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.323829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.323968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.324046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.324246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.324257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.312 [2024-05-15 11:18:24.324402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.324547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.324557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.324643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.324784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.324793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.324873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.325032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.325042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 00:26:27.312 [2024-05-15 11:18:24.325108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.325332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.312 [2024-05-15 11:18:24.325342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.312 qpair failed and we were unable to recover it. 
00:26:27.315 [2024-05-15 11:18:24.343941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 00:26:27.315 [2024-05-15 11:18:24.344092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 00:26:27.315 [2024-05-15 11:18:24.344466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 00:26:27.315 [2024-05-15 11:18:24.344656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 
00:26:27.315 [2024-05-15 11:18:24.344893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.344981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 00:26:27.315 [2024-05-15 11:18:24.345176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.345245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.345255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 00:26:27.315 [2024-05-15 11:18:24.345340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.345422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.345432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.315 qpair failed and we were unable to recover it. 00:26:27.315 [2024-05-15 11:18:24.345576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.315 [2024-05-15 11:18:24.345711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.345721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.345800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.345864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.345877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.346034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.346251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.346430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.346654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.346873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.346944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.347019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.347325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.347591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.347760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.347843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.347992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.348147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.348328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.348563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.348663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.348873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.349117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.349271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.349565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.349730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.349798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.349892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.350246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.350451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.350689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.350783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.350853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.351220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.351429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.351587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.351730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.351803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 
00:26:27.316 [2024-05-15 11:18:24.352113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.352291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.316 [2024-05-15 11:18:24.352459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.316 [2024-05-15 11:18:24.352544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.316 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.352633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.352786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.352796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 
00:26:27.317 [2024-05-15 11:18:24.352884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.352951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.352961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.353027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.353217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.353381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 
00:26:27.317 [2024-05-15 11:18:24.353614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.353838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.353928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.353992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.354211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 
00:26:27.317 [2024-05-15 11:18:24.354387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.354596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.354881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.354958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.355034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 
00:26:27.317 [2024-05-15 11:18:24.355198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.355472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.355639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 00:26:27.317 [2024-05-15 11:18:24.355851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.317 [2024-05-15 11:18:24.355937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.317 qpair failed and we were unable to recover it. 
00:26:27.317 [2024-05-15 11:18:24.356094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.356334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.356590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.356749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.356824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.356961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.357204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.357362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.357588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.357741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.357896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.358186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.358419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.358578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.358819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.358903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.359051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.359116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.359125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.359259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.359348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.359358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.359430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.359571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.317 [2024-05-15 11:18:24.359581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.317 qpair failed and we were unable to recover it.
00:26:27.317 [2024-05-15 11:18:24.359662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.359827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.359836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.359916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.359982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.359992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.360075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.360335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.360554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.360757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.360841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.360977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.361320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.361498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.361683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.318 qpair failed and we were unable to recover it.
00:26:27.318 [2024-05-15 11:18:24.361866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.318 [2024-05-15 11:18:24.361963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.362103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.362336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.362563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.362790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.362935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.363023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.363172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.363414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.363582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.363761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.363857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.364010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.364172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.364313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.364535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.364743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.364891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.364980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.365194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.365417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.365576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.365796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.366002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.366260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.366483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.366745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.366887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.366968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.367033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.367042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.367193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.367359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.367369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.319 [2024-05-15 11:18:24.367451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.367591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.319 [2024-05-15 11:18:24.367601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.319 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.367744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.367879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.367889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.368028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.368302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.368494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.368726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.368877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.368954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.369188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.369451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.369745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.369888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.369970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.370034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.370043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.370199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.370402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.370411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.370561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.370738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.370747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.370881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.371115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.371461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.371822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.371923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.372013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.372334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.372646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.372806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.372943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.373031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.373095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.373105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.373337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.373428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.373438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.373607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.373768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.373777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.373853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.374140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.374371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.374583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.374743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.374944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.375186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.375409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.375742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.375826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.375961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.376110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.376121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.376258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.376434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.376443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.376681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.376889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.376898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.320 qpair failed and we were unable to recover it.
00:26:27.320 [2024-05-15 11:18:24.377044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.320 [2024-05-15 11:18:24.377130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.321 [2024-05-15 11:18:24.377140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.321 qpair failed and we were unable to recover it.
00:26:27.321 [2024-05-15 11:18:24.377290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.321 [2024-05-15 11:18:24.377461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.321 [2024-05-15 11:18:24.377470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.321 qpair failed and we were unable to recover it.
00:26:27.321 [2024-05-15 11:18:24.377557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.377649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.377659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.377875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.378216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.378514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.378747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.378892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.379076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.379384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.379549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.379819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.379906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.380041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.380264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.380476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.380694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.380776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.380926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.381003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.381013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.381226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.381425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.381434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.381520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.381652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.381665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.381833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.382234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.382463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.382626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.382774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.382926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.383223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.383521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.383841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.383917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.384054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.384245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.384484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.321 [2024-05-15 11:18:24.384651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.384903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 
00:26:27.321 [2024-05-15 11:18:24.384995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.385066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.321 [2024-05-15 11:18:24.385075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.321 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.385230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.385470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.385707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.385876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.385981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.386134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.386318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.386462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.386702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.386895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.386993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.387057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.387367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.387532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.387684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.387872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.387964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.388104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.388287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.388543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.388735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.388875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.388969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.389264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.389475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.389650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.389860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.390061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.390289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.390453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.390679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.390839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.390970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.391135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.391359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.391517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.391816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.391979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.392127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.392269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.392279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.392353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.392545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.392554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.392693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.392895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.392905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.322 [2024-05-15 11:18:24.393000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.393132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.393142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 
00:26:27.322 [2024-05-15 11:18:24.393223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.393308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.322 [2024-05-15 11:18:24.393318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.322 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.393469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.393631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.393641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.393842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.393906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.393915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.394049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.394225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.394397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.394701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.394854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.394934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.395085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.395327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.395677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.395816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.395950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.396204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.396366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.396586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.396755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.396895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.397043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.397202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.397438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.397650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.397912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.397999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.398083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.398172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.398182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.398336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.398492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.398502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.398664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.398777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.398786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.398865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.399185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.399532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 00:26:27.323 [2024-05-15 11:18:24.399774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.399852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.323 qpair failed and we were unable to recover it. 
00:26:27.323 [2024-05-15 11:18:24.399935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.400065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.323 [2024-05-15 11:18:24.400074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.400190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.400353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.400502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.400645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.400794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.400888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.400965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.401239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.401435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.401790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.401876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.402011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.402256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.402475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.402698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.402792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.402939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.403174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.403549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.403851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.403942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.404097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.404280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.404439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.404677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.404837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.404969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.405223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.405405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.405727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.405886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.406057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.406197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.406207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.406279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.406359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.406368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.406575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.406639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.406648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.406869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.407114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.407399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.407709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.407887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.408029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.408272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.408596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 
00:26:27.324 [2024-05-15 11:18:24.408829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.408932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.324 [2024-05-15 11:18:24.409079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.409142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.324 [2024-05-15 11:18:24.409151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.324 qpair failed and we were unable to recover it. 00:26:27.325 [2024-05-15 11:18:24.409249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.325 [2024-05-15 11:18:24.409384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.325 [2024-05-15 11:18:24.409395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.325 qpair failed and we were unable to recover it. 00:26:27.325 [2024-05-15 11:18:24.409598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.325 [2024-05-15 11:18:24.409677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.325 [2024-05-15 11:18:24.409688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.325 qpair failed and we were unable to recover it. 
00:26:27.327 [2024-05-15 11:18:24.432267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.432345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.432355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.432420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.432515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.432525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.432609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.432685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.432695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.432918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 
00:26:27.327 [2024-05-15 11:18:24.433228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.433508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.433864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.433955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.434023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.434162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.434174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 
00:26:27.327 [2024-05-15 11:18:24.434383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.434521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.434531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.434666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.434833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.434843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.434991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.435082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.435092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.435181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.435344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.435354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 
00:26:27.327 [2024-05-15 11:18:24.435495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.435627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.435638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.435805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.436004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.436015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.327 qpair failed and we were unable to recover it. 00:26:27.327 [2024-05-15 11:18:24.436190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.436267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.327 [2024-05-15 11:18:24.436276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.436356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.436419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.436428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.436493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.436645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.436656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.436801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.436878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.436888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.436971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.437203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.437432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.437742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.437902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.438039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.438329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.438555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.438737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.438835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.438925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.439204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.439412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.439646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.439877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.439969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.440182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.440342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.440352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.440576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.440651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.440661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.440910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.440995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.441096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.441430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.441584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.441884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.441992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.442083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.442283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.442293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.442429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.442511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.442520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.442579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.442686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.442695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.442900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.443147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.443532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.443817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.443961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.444106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.444315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.444325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.444464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.444599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.444609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.444700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.444858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.444868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 
00:26:27.328 [2024-05-15 11:18:24.445006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.445102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.445111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.445256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.445325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.445335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.328 qpair failed and we were unable to recover it. 00:26:27.328 [2024-05-15 11:18:24.445514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.445655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.328 [2024-05-15 11:18:24.445665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.445871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 
00:26:27.329 [2024-05-15 11:18:24.446118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.446501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.446723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.446881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.447061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 
00:26:27.329 [2024-05-15 11:18:24.447297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.447602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.447815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.447913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 00:26:27.329 [2024-05-15 11:18:24.447989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.448081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.329 [2024-05-15 11:18:24.448091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.329 qpair failed and we were unable to recover it. 
[the identical posix.c:1037:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420" sequence repeats continuously through 2024-05-15 11:18:24.467001; every attempt ends with "qpair failed and we were unable to recover it."]
00:26:27.331 [2024-05-15 11:18:24.467139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.467201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.467211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.467343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.467433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.467443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.467578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.467759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.467768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.467830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 
00:26:27.331 [2024-05-15 11:18:24.468133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.468299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.468598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.468909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.468997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 
00:26:27.331 [2024-05-15 11:18:24.469216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.469405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.469579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.469886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.469963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 
00:26:27.331 [2024-05-15 11:18:24.470059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.470250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.470389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.470606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.470748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 
00:26:27.331 [2024-05-15 11:18:24.470900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.471128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.471298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.471610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.471824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 
00:26:27.331 [2024-05-15 11:18:24.471913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.472101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.472466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 00:26:27.331 [2024-05-15 11:18:24.472708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.331 qpair failed and we were unable to recover it. 
00:26:27.331 [2024-05-15 11:18:24.472887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.331 [2024-05-15 11:18:24.472968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.472977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.473146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.473417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.473596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.473817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.473904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.474049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.474338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.474483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.474705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.474863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.474948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.475241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.475536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.475788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.475940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.476108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.476183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.476193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.476283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.476349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.476359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.476583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.476655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.476664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.476829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.477185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.477499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.477742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.477897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.478120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.478196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.478205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.478359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.478451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.478461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.478599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.478751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.478761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.478984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.479061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.479071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.479212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.479346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.479356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.479422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.479601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.479610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.479832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.480155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.480348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.480540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.480819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.480958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 00:26:27.332 [2024-05-15 11:18:24.481141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.481284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.332 [2024-05-15 11:18:24.481294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.332 qpair failed and we were unable to recover it. 
00:26:27.332 [2024-05-15 11:18:24.481428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.332 [2024-05-15 11:18:24.481511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.332 [2024-05-15 11:18:24.481520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.332 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 11:18:24.481428 through 11:18:24.505051; only the timestamps differ between repetitions ...]
00:26:27.335 [2024-05-15 11:18:24.505124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.505269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.505281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.505382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.505547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.505557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.505698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.505839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.505848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.505991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 
00:26:27.335 [2024-05-15 11:18:24.506286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.506438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.506724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.506811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.506977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 
00:26:27.335 [2024-05-15 11:18:24.507306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.507530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.507675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.335 [2024-05-15 11:18:24.507760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.335 qpair failed and we were unable to recover it. 00:26:27.335 [2024-05-15 11:18:24.507840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.507977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.507987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.508136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.508357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.508538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.508742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.508902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.508979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.509134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.509411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.509590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.509874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.509948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.510029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.510245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.510539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.510837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.510916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.510996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.511156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.511450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.511685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.511765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.511940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.512247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.512478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.512657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.512833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.512977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.513066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.513284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 
00:26:27.336 [2024-05-15 11:18:24.513518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.336 [2024-05-15 11:18:24.513753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.336 [2024-05-15 11:18:24.513892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.336 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.514061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.514231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 
00:26:27.337 [2024-05-15 11:18:24.514548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.514783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.514868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.515036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.515351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 
00:26:27.337 [2024-05-15 11:18:24.515568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.515713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.515849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.515982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.516211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 
00:26:27.337 [2024-05-15 11:18:24.516407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.516621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.516767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.516962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.517135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 
00:26:27.337 [2024-05-15 11:18:24.517364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.517595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.517827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.517888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.518099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 
00:26:27.337 [2024-05-15 11:18:24.518331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.518537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.518826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.518901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 00:26:27.337 [2024-05-15 11:18:24.518992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.519088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.337 [2024-05-15 11:18:24.519098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:27.337 qpair failed and we were unable to recover it. 
00:26:27.339 [2024-05-15 11:18:24.526059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.339 [2024-05-15 11:18:24.526139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.339 [2024-05-15 11:18:24.526149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.339 qpair failed and we were unable to recover it.
00:26:27.339 [2024-05-15 11:18:24.526218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.339 [2024-05-15 11:18:24.526361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.339 [2024-05-15 11:18:24.526370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.339 qpair failed and we were unable to recover it.
00:26:27.614 [2024-05-15 11:18:24.526538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.614 [2024-05-15 11:18:24.526647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.614 [2024-05-15 11:18:24.526674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:27.614 qpair failed and we were unable to recover it.
00:26:27.614 [2024-05-15 11:18:24.526866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.614 [2024-05-15 11:18:24.526983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.614 [2024-05-15 11:18:24.527000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.614 qpair failed and we were unable to recover it.
00:26:27.616 [2024-05-15 11:18:24.542878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.543102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.543131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.543259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.543351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.543369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.543542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.543726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.543755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.543944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.544060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.544088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 
00:26:27.616 [2024-05-15 11:18:24.544212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.544470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.544499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.544690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.544859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.544888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.545067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.545307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.545337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.545553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.545683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.545712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 
00:26:27.616 [2024-05-15 11:18:24.545891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.545997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.546201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.546600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 00:26:27.616 [2024-05-15 11:18:24.546786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.546973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.616 qpair failed and we were unable to recover it. 
00:26:27.616 [2024-05-15 11:18:24.547069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.616 [2024-05-15 11:18:24.547202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.547216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.547377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.547514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.547528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.547669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.547760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.547773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.547858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.548018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.548031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.548113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.548257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.548271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.548458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.548619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.548632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.548787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.549030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.549059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.549228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.549442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.549473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.549709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.549864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.549877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.550036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.550174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.550188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.550282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.550510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.550523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.550607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.550826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.550854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.551116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.551294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.551324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.551450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.551637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.551650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.551741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.551925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.551938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.552088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.552323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.552354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.552571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.552763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.552792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.552912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.553028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.553057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.553305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.553492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.553521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.553782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.554241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.554579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.554774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.554944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.555041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.555184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.555198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.555416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.555549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.555578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.555770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.555887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.555916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.556092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.556307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.556337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.556577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.556842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.556870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 
00:26:27.617 [2024-05-15 11:18:24.556984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.557246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.557276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.617 qpair failed and we were unable to recover it. 00:26:27.617 [2024-05-15 11:18:24.557481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.617 [2024-05-15 11:18:24.557736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.557765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.557990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.558252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.558282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.558494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.558724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.558753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 
00:26:27.618 [2024-05-15 11:18:24.558875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.559438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.559657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.559839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.559989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.560002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 
00:26:27.618 [2024-05-15 11:18:24.560240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.560399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.560412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.560569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.560726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.560740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.560919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.561086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.561099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.561428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.561458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 
00:26:27.618 [2024-05-15 11:18:24.561665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.561834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.561863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.562051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.562172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.562202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.562376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.562553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.562582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 00:26:27.618 [2024-05-15 11:18:24.562832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.563004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-05-15 11:18:24.563034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.618 qpair failed and we were unable to recover it. 
00:26:27.618 [2024-05-15 11:18:24.563249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.563483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.563512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.563792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.563989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.564018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.564157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.564346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.564375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.564566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.564697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.564726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.564922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.565090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.565118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.565344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.565590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.565604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.565781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.565920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.565933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.566020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.566108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.566121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.566328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.566418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.566431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.566612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.566748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.566761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.566990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.567087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.567100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.567299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.567487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.567516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.567801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.567985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.568014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.568213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.568390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.568418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.568619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.568794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.618 [2024-05-15 11:18:24.568806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.618 qpair failed and we were unable to recover it.
00:26:27.618 [2024-05-15 11:18:24.568903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.569104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.569369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.569750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.569950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.570081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.570258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.570289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.570520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.570784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.570797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.570948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.571089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.571102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.571272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.571484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.571512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.571708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.571826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.571855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.572037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.572188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.572219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.572344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.572468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.572497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.572692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.572818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.572831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.572908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.573046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.573059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.573205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.573374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.573403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.573537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.573665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.573694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.573887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.574042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.574071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.574191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.574391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.574419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.574612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.574736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.574764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.574949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.575187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.575216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.575410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.575532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.575561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.575743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.575914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.575943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.576123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.576251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.576282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.576533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.576635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.576648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.619 qpair failed and we were unable to recover it.
00:26:27.619 [2024-05-15 11:18:24.576738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.576859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.619 [2024-05-15 11:18:24.576888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.577085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.577325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.577356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.577564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.577663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.577676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.577822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.577895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.577908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.578010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.578176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.578191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.578337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.578574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.578603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.578776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.578893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.578922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.579115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.579382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.579413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.579593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.579794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.579807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.580022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.580205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.580236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.580363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.580481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.580510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.580755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.580886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.580915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.581098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.581283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.581314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.581523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.581741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.581754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.581911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.582149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.582188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.582317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.582419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.582448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.582625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.582750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.582780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.582896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.583100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.583130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.583260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.583441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.583470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.583660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.583861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.583889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.584015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.584189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.584219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.584415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.584532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.584560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.584737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.584924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.584953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.585151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.585345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.585374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.585643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.585825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.585854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.585976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.586182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.586213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.586401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.586592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.586621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.586732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.586845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.586874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.587072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.587267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.587298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.587539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.587656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.587670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.620 qpair failed and we were unable to recover it.
00:26:27.620 [2024-05-15 11:18:24.587826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.620 [2024-05-15 11:18:24.587998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.588156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.588499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.588775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.588961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.589144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.589392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.589422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.589538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.589700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.589713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.589880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.590122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.590152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.590337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.590525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.590554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.590670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.590926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.590942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.591020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.591174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.591188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.591329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.591427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.591440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.591577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.591664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.591677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.591820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.592144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.592457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.592698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.592808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.592902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.593206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.593562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.593798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.621 [2024-05-15 11:18:24.593882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.621 qpair failed and we were unable to recover it.
00:26:27.621 [2024-05-15 11:18:24.593980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.594150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.594410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.594649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 
00:26:27.621 [2024-05-15 11:18:24.594825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.594939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.595080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.595260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.595440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 
00:26:27.621 [2024-05-15 11:18:24.595657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.595761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.595902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.596040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.621 [2024-05-15 11:18:24.596053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.621 qpair failed and we were unable to recover it. 00:26:27.621 [2024-05-15 11:18:24.596246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.596489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.596517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.596713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.596942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.596954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.597117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.597291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.597305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.597466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.597596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.597609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.597728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.597886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.597899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.597997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.598225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.598239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.598472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.598636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.598650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.598806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.598894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.598908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.599069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.599224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.599239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.599337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.599581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.599610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.599752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.599923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.599952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.600141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.600272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.600301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.600510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.600713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.600726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.600880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.600969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.600982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.601072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.601174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.601188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.601342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.601506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.601519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.601585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.601813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.601825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.601906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.602221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.602480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.602774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.602994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.603008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.603167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.603355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.603384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.603494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.603662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.603690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.603811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.604244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.604583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.622 [2024-05-15 11:18:24.604770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.604941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.605037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.605173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.605187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.605345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.605445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.605474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 00:26:27.622 [2024-05-15 11:18:24.605586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.605691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.622 [2024-05-15 11:18:24.605720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.622 qpair failed and we were unable to recover it. 
00:26:27.623 [2024-05-15 11:18:24.605895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.606176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.606206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.606320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.606509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.606538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.606732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.606937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.606951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.607112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.607279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.607292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 
00:26:27.623 [2024-05-15 11:18:24.607525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.607700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.607729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.607976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.608211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.608462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 
00:26:27.623 [2024-05-15 11:18:24.608729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.608891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.609061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.609236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.609267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.609383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.609574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.609603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.609809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.609909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.609922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 
00:26:27.623 [2024-05-15 11:18:24.610065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.610226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.610240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.610451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.610549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.610585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.610774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.610891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.610920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 00:26:27.623 [2024-05-15 11:18:24.611061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.611320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.623 [2024-05-15 11:18:24.611351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.623 qpair failed and we were unable to recover it. 
00:26:27.623 [2024-05-15 11:18:24.611565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.623 [2024-05-15 11:18:24.611803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.623 [2024-05-15 11:18:24.611817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.623 qpair failed and we were unable to recover it.
[The same three-line error cycle (posix.c:1037:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously throughout this portion of the log, from 11:18:24.611 through 11:18:24.639.]
00:26:27.626 [2024-05-15 11:18:24.639557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.639690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.639702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.639890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.640077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.640105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.640323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.640436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.640474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.640633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.640771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.640784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 
00:26:27.626 [2024-05-15 11:18:24.640883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.641038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.641051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.641203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.641273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.641288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.641440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.641671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.641700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.641946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.642079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.642107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 
00:26:27.626 [2024-05-15 11:18:24.642291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.642539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.626 [2024-05-15 11:18:24.642569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.626 qpair failed and we were unable to recover it. 00:26:27.626 [2024-05-15 11:18:24.642700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.642853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.642867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.643021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.643275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.643289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.643460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.643628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.643657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.643844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.644079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.644107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.644239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.644476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.644505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.644765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.644903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.644916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.645003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.645194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.645440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.645771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.645919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.646002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.646176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.646433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.646683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.646873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.646958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.647208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.647506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.647877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.647977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.648069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.648152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.648169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.648258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.648403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.648416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.648661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.648749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.648762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.648927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.649110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.649462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.649745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.649828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.649916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.650014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.650027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.650244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.650446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.650479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 
00:26:27.627 [2024-05-15 11:18:24.650665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.650790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.650803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.650965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.651174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.651188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.627 qpair failed and we were unable to recover it. 00:26:27.627 [2024-05-15 11:18:24.651274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.651416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.627 [2024-05-15 11:18:24.651429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.651661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.651815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.651828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 
00:26:27.628 [2024-05-15 11:18:24.651999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.652113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.652142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.652338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.652615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.652644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.652783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.652969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.652998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.653249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.653388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.653416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 
00:26:27.628 [2024-05-15 11:18:24.653657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.653766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.653794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.654053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.654327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.654558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 
00:26:27.628 [2024-05-15 11:18:24.654846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.654997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.655224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.655462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.655657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.655830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 
00:26:27.628 [2024-05-15 11:18:24.655921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.656175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.656524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.656828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.656939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 
00:26:27.628 [2024-05-15 11:18:24.657103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.657271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.657285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.657389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.657551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.657564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.657772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.657992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.658021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 00:26:27.628 [2024-05-15 11:18:24.658196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.658378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.628 [2024-05-15 11:18:24.658407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.628 qpair failed and we were unable to recover it. 
00:26:27.631 [2024-05-15 11:18:24.687521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.687686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.687699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.687931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.688114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.688142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.688359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.688542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.688572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.688686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.688858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.688871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 
00:26:27.631 [2024-05-15 11:18:24.688965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.689103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.689117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.689374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.689587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.689616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.689793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.689977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.690005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.690224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.690363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.690392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 
00:26:27.631 [2024-05-15 11:18:24.690506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.690744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.690773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.690912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.691116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.691158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.691341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.691491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.691521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.631 qpair failed and we were unable to recover it. 00:26:27.631 [2024-05-15 11:18:24.691632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.631 [2024-05-15 11:18:24.691734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.691747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.691930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.692161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.692202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.692446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.692629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.692643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.692732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.692833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.692846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.692942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.693031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.693044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.693268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.693403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.693431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.693548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.693721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.693749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.693904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.694084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.694097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.694339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.694492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.694521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.694658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.694843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.694872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.695115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.695253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.695283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.695472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.695674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.695720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.695850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.696104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.696133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.696384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.696556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.696585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.696826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.697092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.697121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.697240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.697424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.697453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.697594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.697787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.697800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.697876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.698232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.698604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.698863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.698960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.699051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.699145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.699161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.699272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.699450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.699463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.699634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.699779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.699792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.699880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 
00:26:27.632 [2024-05-15 11:18:24.700136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.700486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.632 qpair failed and we were unable to recover it. 00:26:27.632 [2024-05-15 11:18:24.700868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.700985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.632 [2024-05-15 11:18:24.701016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.701119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.701254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.701268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 
00:26:27.633 [2024-05-15 11:18:24.701345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.701429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.701442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.701593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.701766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.701780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.701939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.702029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.702063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.702241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.702371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.702400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 
00:26:27.633 [2024-05-15 11:18:24.702524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.702661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.702689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.702890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.703173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.703563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 
00:26:27.633 [2024-05-15 11:18:24.703882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.703971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.704069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.704258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.704593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 
00:26:27.633 [2024-05-15 11:18:24.704851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.704992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.705099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.705300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 00:26:27.633 [2024-05-15 11:18:24.705606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.633 [2024-05-15 11:18:24.705726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.633 qpair failed and we were unable to recover it. 
00:26:27.633 [2024-05-15 11:18:24.705822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.633 [2024-05-15 11:18:24.705975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.633 [2024-05-15 11:18:24.705989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.633 qpair failed and we were unable to recover it.
[The same four-line sequence — two posix_sock_create connect() failures (errno = 111, ECONNREFUSED), an nvme_tcp_qpair_connect_sock error for tqpair=0x7fa698000b90 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 11:18:24.706078 through 11:18:24.738974, differing only in timestamps.]
00:26:27.636 [2024-05-15 11:18:24.739201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.739497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.739513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.739720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.739811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.739825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.740046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.740212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.740230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.740417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.740520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.740534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.636 [2024-05-15 11:18:24.740722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.740929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.740947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.741098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.741359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.741373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.741604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.741847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.741860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.742029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.742259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.742273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.636 [2024-05-15 11:18:24.742475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.742650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.742664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.742810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.743040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.743054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.743275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.743482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.743495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.743691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.743874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.743888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.636 [2024-05-15 11:18:24.744038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.744201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.744216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.744448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.744702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.744716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.744900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.745114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.745127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.745343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.745496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.745510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.636 [2024-05-15 11:18:24.745679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.745910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.745923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.746156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.746412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.746426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.746575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.746793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.746806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.747004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.747213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.747227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.636 [2024-05-15 11:18:24.747463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.747619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.747633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.747817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.748035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.748048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.748220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.748396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.636 [2024-05-15 11:18:24.748409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-05-15 11:18:24.748589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.748814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.748828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.749048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.749279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.749293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.749529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.749686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.749699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.749930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.750079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.750092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.750237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.750449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.750462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.750737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.750901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.750915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.751147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.751313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.751332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.751559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.751707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.751720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.751821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.752079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.752092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.752273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.752518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.752532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.752779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.753022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.753035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.753192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.753350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.753363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.753506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.753647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.753660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.753886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.754044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.754057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.754226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.754375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.754388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.754608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.754753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.754767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.754918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.755102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.755118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.755275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.755425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.755438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.755674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.755830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.755844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.756042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.756263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.756277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.756533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.756755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.756768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.757028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.757203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.757217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.757396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.757655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.757668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.757902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.758054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.758066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.758292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.758473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.758486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.758722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.758986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.758999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.759075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.759295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.759312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.759415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.759556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.759569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.759798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.759952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.759966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.760150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.760307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.760321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.760548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.760782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.760796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.761087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.761177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.761191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.761437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.761618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.761631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 
00:26:27.637 [2024-05-15 11:18:24.761733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.761941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.761954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.637 [2024-05-15 11:18:24.762050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.762255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.637 [2024-05-15 11:18:24.762269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.637 qpair failed and we were unable to recover it. 00:26:27.638 [2024-05-15 11:18:24.762430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.638 [2024-05-15 11:18:24.762679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.638 [2024-05-15 11:18:24.762692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.638 qpair failed and we were unable to recover it. 00:26:27.638 [2024-05-15 11:18:24.762958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.638 [2024-05-15 11:18:24.763201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.638 [2024-05-15 11:18:24.763217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.638 qpair failed and we were unable to recover it. 
00:26:27.639 [2024-05-15 11:18:24.789979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.639 [2024-05-15 11:18:24.790061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.639 [2024-05-15 11:18:24.790074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.639 qpair failed and we were unable to recover it.
00:26:27.639 [2024-05-15 11:18:24.790217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.639 [2024-05-15 11:18:24.790438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.639 [2024-05-15 11:18:24.790451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.639 qpair failed and we were unable to recover it.
00:26:27.639 [2024-05-15 11:18:24.790587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.640 [2024-05-15 11:18:24.790824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.640 [2024-05-15 11:18:24.790841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.640 qpair failed and we were unable to recover it.
00:26:27.640 [2024-05-15 11:18:24.791006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.640 [2024-05-15 11:18:24.791225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.640 [2024-05-15 11:18:24.791241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.640 qpair failed and we were unable to recover it.
00:26:27.640 [2024-05-15 11:18:24.796206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.796434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.796447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.796552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.796701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.796713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.796955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.797133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.797147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.797404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.797640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.797653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 
00:26:27.640 [2024-05-15 11:18:24.797886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.798146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.798160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.798377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.798552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.798566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.798778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.798949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.798964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.799200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.799411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.799425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 
00:26:27.640 [2024-05-15 11:18:24.799658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.799857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.799870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.800013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.800172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.800187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.800335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.800592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.800605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.800786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.801039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.801053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 
00:26:27.640 [2024-05-15 11:18:24.801303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.801481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.801494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.801724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.801815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.801828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.801986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.802136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.802149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.640 qpair failed and we were unable to recover it. 00:26:27.640 [2024-05-15 11:18:24.802352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.640 [2024-05-15 11:18:24.802583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.802597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.802805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.803031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.803045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.803216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.803442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.803456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.803664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.803818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.803831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.804057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.804210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.804223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.804448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.804637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.804650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.804859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.805081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.805095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.805323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.805490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.805503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.805710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.805946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.805960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.806171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.806441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.806454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.806662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.806906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.806920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.807128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.807335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.807349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.807557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.807791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.807804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.807973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.808123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.808136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.808366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.808469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.808483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.808663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.808862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.808875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.809131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.809292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.809306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.809464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.809602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.809615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.809791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.810147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.810406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.810735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.810988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.811001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.811214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.811419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.811433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.811662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.811843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.811856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.812086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.812267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.812281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.812518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.812688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.812701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.812867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.813087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.813102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.813194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.813418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.813433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.813673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.813911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.813925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.814085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.814346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.814360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.814611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.814846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.814860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.815069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.815244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.815259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.815469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.815704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.815718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.815955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.816056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.816069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.816276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.816435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.816449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.816606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.816763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.816776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.816936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.817138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.817153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 
00:26:27.641 [2024-05-15 11:18:24.817327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.817570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.817583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.817799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.817959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.817973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.818132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.818238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.818252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.641 qpair failed and we were unable to recover it. 00:26:27.641 [2024-05-15 11:18:24.818416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.641 [2024-05-15 11:18:24.818619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.818633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 
00:26:27.642 [2024-05-15 11:18:24.818892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.819142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.819155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 00:26:27.642 [2024-05-15 11:18:24.819423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.819650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.819666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 00:26:27.642 [2024-05-15 11:18:24.819769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.820020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.820033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 00:26:27.642 [2024-05-15 11:18:24.820189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.820282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.820295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 
00:26:27.642 [2024-05-15 11:18:24.820449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.820698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.820712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 00:26:27.642 [2024-05-15 11:18:24.820900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.821143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.821157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 00:26:27.642 [2024-05-15 11:18:24.821400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.821559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.821573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 00:26:27.642 [2024-05-15 11:18:24.821821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.822067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.642 [2024-05-15 11:18:24.822080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.642 qpair failed and we were unable to recover it. 
00:26:27.642 [2024-05-15 11:18:24.822322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.822473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.822486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.822629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.822812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.822826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.822975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.823074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.823087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.823183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.823437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.823451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.823615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.823784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.823797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.823980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.824074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.824088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.824312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.824409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.824422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.824605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.824839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.824853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.825147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.825365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.825379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.825586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.825734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.825748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.825913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.826068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.826081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.826247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.826498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.826511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.826656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.826813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.826826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.827057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.827222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.827236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.827393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.827575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.827588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.827810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.827966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.827979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.828210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.828443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.828456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.828621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.828851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.828864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.829048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.829200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.829214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.829429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.829584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.829598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.829831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.830063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.830076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.830308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.830449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.830462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.830614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.830834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.830848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.831054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.831289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.831303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.831490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.831675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.831689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.831830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.832036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.832049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.832303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.832567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.832581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.642 [2024-05-15 11:18:24.832752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.832906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.642 [2024-05-15 11:18:24.832919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.642 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.833062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.833270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.833284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.833519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.833811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.833824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.834056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.834292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.834306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.834412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.834638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.834651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.834735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.834921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.834935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.835111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.835210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.835224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.835311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.835537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.835552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.835705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.835930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.835943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.836174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.836382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.836395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.836483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.836637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.836650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.836823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.836982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.836995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.837203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.837343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.837356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.837531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.837742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.837755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.837965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.838190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.838204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.838377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.838596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.838609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.838846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.838995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.839008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.839181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.839390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.839406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.839641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.839865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.839878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.840054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.840273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.840287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.840498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.840728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.840741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.840903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.841056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.841070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.841225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.841432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.841446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.841685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.841856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.841869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.842074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.842242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.842255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.842478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.842636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.842650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.842873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.843025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.843038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.843271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.843428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.843442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.843626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.843832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.843845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.844072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.844302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.844316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.844402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.844556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.844569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.844729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.844957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.844970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.845191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.845349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.845362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.845588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.845682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.643 [2024-05-15 11:18:24.845695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.643 qpair failed and we were unable to recover it.
00:26:27.643 [2024-05-15 11:18:24.845901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.846114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.846127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.846314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.846486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.846500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.846669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.846881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.846895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.846987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.847072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.847086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.847298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.847394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.847407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.847509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.847648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.847662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.847891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.848033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.848047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.848210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.848463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.848477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.848637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.848728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.848741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.849002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.849227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.849240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.849476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.849634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.644 [2024-05-15 11:18:24.849647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.644 qpair failed and we were unable to recover it.
00:26:27.644 [2024-05-15 11:18:24.849806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.849961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.849974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.850245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.850386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.850400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.850502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.850588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.850602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.850757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.850962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.850975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.851133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.851291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.851305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.851456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.851595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.851608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.851752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.851983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.851997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.852102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.852315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.852329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.852485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.852649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.852663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.852869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.853075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.853089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.853391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.853621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.853635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.853777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.853865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.853879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.853988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.854091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.854105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.854251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.854413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.854429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.854618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.854803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.854817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.854990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.855240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.855260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.855443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.855677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.855691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.855778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.855869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.855882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.856065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.856275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.856290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.856530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.856682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.856695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.856908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.857085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.857099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.857208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.857383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.857397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.857576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.857742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.857764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.857936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.858096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.858110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.858297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.858475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.858489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.858699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.858797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.858810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.859041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.859128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.859141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.859355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.859568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.859590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 
00:26:27.644 [2024-05-15 11:18:24.859909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.860128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.860141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.860308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.860544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.860559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.644 qpair failed and we were unable to recover it. 00:26:27.644 [2024-05-15 11:18:24.860784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.860944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.644 [2024-05-15 11:18:24.860962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.645 qpair failed and we were unable to recover it. 00:26:27.918 [2024-05-15 11:18:24.861186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.861346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.861360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.918 qpair failed and we were unable to recover it. 
00:26:27.918 [2024-05-15 11:18:24.861581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.861832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.861846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.918 qpair failed and we were unable to recover it. 00:26:27.918 [2024-05-15 11:18:24.862107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.862262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.862276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.918 qpair failed and we were unable to recover it. 00:26:27.918 [2024-05-15 11:18:24.862446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.862602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.862615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.918 qpair failed and we were unable to recover it. 00:26:27.918 [2024-05-15 11:18:24.862844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.918 [2024-05-15 11:18:24.863049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.863155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.863489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.863705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.863797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.863973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.864226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.864241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.864414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.864564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.864577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.864795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.865021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.865034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.865191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.865427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.865442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.865653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.865912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.865926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.866163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.866326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.866340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.866489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.866643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.866657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.866824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.866924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.866937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.867132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.867378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.867392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.867627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.867892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.867905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.868137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.868371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.868385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.868557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.868786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.868799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.868886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.869090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.869104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.869315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.869537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.869550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.869756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.869905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.869918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.870137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.870320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.870335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.870556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.870704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.870717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.870870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.871081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.871094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.871247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.871430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.871444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.871698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.871998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.872011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.872162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.872392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.872406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.872559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.872799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.872812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.873024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.873114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.873128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.873342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.873573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.873586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-05-15 11:18:24.873747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.873838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-05-15 11:18:24.873852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-05-15 11:18:24.874001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.919 [2024-05-15 11:18:24.874150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.919 [2024-05-15 11:18:24.874173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.919 qpair failed and we were unable to recover it.
00:26:27.919 [2024-05-15 11:18:24.874433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.919 [2024-05-15 11:18:24.874599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.874612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.874782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.874937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.874950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.875189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.875442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.875456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.875662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.875835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.875848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.876078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.876321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.876335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.876598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.876779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.876792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.876973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.877224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.877238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.877444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.877589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.877603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.877809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.878061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.878075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.878161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.878331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.878344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.878603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.878703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.878716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.878972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.879062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.879075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.879224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.879461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.879475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.879732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.879906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.879919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.880149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.880241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.880255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.880406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.880584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.880598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.880806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.881024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.881038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.881185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.881437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.881450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.881676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.881849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.881863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.882044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.882253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.882267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.882415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.882621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.882635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.882874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.883186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.883201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.883461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.883660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.883673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.883882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.884094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.884107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.884215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.884375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.884389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.884571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.884847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.884860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.885012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.885234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.885248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.885478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.885641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.885655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.885866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.886037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.886050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.886296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.886461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.886474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.920 qpair failed and we were unable to recover it.
00:26:27.920 [2024-05-15 11:18:24.886626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.920 [2024-05-15 11:18:24.886858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.886871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.887095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.887327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.887340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.887571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.887825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.887838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.888001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.888250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.888264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.888496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.888765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.888777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.888941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.889081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.889094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.889311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.889559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.889572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.889821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.889962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.889975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.890184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.890389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.890402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.890563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.890793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.890805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.891024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.891236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.891271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.891545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.891809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.891838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.892056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.892289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.892320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.892581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.892815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.892829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.893088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.893244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.893257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.893488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.893639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.893669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.893948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.894159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.894198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.894387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.894532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.894544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.894776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.894957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.894986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.895205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.895340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.895371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.895634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.895761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.895794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.895997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.896187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.896218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.896426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.896625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.896653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.896830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.897044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.897073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.897259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.897493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.897522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.897764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.897958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.897987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.898198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.898354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.898382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.898506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.898742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.898771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.899040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.899291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.899305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.899513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.899671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.921 [2024-05-15 11:18:24.899684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.921 qpair failed and we were unable to recover it.
00:26:27.921 [2024-05-15 11:18:24.899892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.900147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.900187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.900317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.900579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.900608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.900906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.901094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.901123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.901404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.901594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.901623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.901817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.902050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.902079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.902265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.902529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.902558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.902824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.903059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.903096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.903331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.903540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.903568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.903757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.903935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.903965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.904140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.904362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.904392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.904602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.904768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.904796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.905067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.905303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.905333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.905574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.905827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.905841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.906092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.906249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.906263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.906491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.906651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.906680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.906861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.907071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.907100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.907378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.907636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.907649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.907809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.908013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.908042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.908225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.908394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.908423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.908598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.908763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.908794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.909066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.909262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.909291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.909415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.909680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.909694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.909933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.910122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.910151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.910363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.910622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.910640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.910886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.911119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.911136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.911389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.911601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.922 [2024-05-15 11:18:24.911620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.922 qpair failed and we were unable to recover it.
00:26:27.922 [2024-05-15 11:18:24.911733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-05-15 11:18:24.911889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-05-15 11:18:24.911903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-05-15 11:18:24.912059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.912239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.912254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.912345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.912521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.912535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.912692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.912929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.912957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.913222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.913489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.913517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-05-15 11:18:24.913773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.914000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.914017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.914272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.914426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.914439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.914596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.914825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.914854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.915101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.915292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.915322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-05-15 11:18:24.915588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.915828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.915842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.916049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.916280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.916294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.916436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.916676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.916690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.916924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.917186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.917215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-05-15 11:18:24.917458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.917708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.917721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.917928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.918160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.918201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.918416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.918621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.918649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.918901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.919144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.919182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-05-15 11:18:24.919435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.919618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.919648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.919910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.920103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.920132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.920280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.920548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.920578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.920851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.921041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.921070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-05-15 11:18:24.921308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.921524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.921553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.921753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.922025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.922054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-05-15 11:18:24.922321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-05-15 11:18:24.922442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.922482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.922586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.922696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.922709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.922876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.923065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.923094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.923361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.923571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.923585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.923803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.924011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.924040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.924302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.924419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.924447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.924715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.924931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.924944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.925150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.925401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.925415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.925611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.925915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.925944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.926187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.926379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.926408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.926625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.926785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.926799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.927011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.927246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.927261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.927539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.927642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.927656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.927866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.928121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.928150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.928406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.928602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.928615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.928857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.929036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.929050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.929309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.929506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.929535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.929781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.930020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.930050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.930184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.930448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.930478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.930607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.930842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.930856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.931017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.931124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.931138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.931369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.931582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.931612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.931880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.932054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.932083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.932305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.932466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.932479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.932740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.932932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.932962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.933159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.933341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.933371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.933587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.933831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.933844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.934061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.934219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.934234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.934325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.934562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.934590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 00:26:27.924 [2024-05-15 11:18:24.934836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.935082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.924 [2024-05-15 11:18:24.935110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.924 qpair failed and we were unable to recover it. 
00:26:27.924 [2024-05-15 11:18:24.935367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.935616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.935645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 00:26:27.925 [2024-05-15 11:18:24.935908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.936198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.936230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 00:26:27.925 [2024-05-15 11:18:24.936477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.936762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.936776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 00:26:27.925 [2024-05-15 11:18:24.936933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.937146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.937189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 
00:26:27.925 [2024-05-15 11:18:24.937387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.937625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.937654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 00:26:27.925 [2024-05-15 11:18:24.937910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.938098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.938126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 00:26:27.925 [2024-05-15 11:18:24.938323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.938559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.938590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 00:26:27.925 [2024-05-15 11:18:24.938859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.939072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.925 [2024-05-15 11:18:24.939102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.925 qpair failed and we were unable to recover it. 
00:26:27.928 [2024-05-15 11:18:24.969285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.969477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.969491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.969719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.969882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.969910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.970227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.970343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.970372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.970640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.970828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.970862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 
00:26:27.928 [2024-05-15 11:18:24.971151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.971488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.971518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.971808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.971989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.972018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.972266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.972389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.972418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.972691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.972921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.972934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 
00:26:27.928 [2024-05-15 11:18:24.973031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.973176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.973190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.973422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.973607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.973636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.973846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.974021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.974050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.974198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.974483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.974511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 
00:26:27.928 [2024-05-15 11:18:24.974708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.974932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.974946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.975189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.975465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.975495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.975770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.976009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.976038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.976281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.976468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.976497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 
00:26:27.928 [2024-05-15 11:18:24.976673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.976921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.976950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.977151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.977443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.977473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.977729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.977991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.978020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.978211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.978385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.978414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 
00:26:27.928 [2024-05-15 11:18:24.978604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.978846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.978875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.979083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.979208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.979238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.928 qpair failed and we were unable to recover it. 00:26:27.928 [2024-05-15 11:18:24.979410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.928 [2024-05-15 11:18:24.979645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.979674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.979870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.980157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.980214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.980493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.980735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.980770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.980926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.981190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.981220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.981347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.981609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.981638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.981913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.982013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.982027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.982279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.982518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.982547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.982737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.983026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.983056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.983272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.983537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.983566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.983741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.983971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.983985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.984196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.984373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.984402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.984644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.984909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.984922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.985171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.985322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.985336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.985578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.985819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.985848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.986047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.986355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.986386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.986679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.986865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.986893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.987171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.987363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.987391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.987662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.987789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.987802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.987962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.988181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.988195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.988367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.988518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.988531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.988767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.989027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.989040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.989254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.989426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.989439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.989663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.989757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.989769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.989977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.990119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.990132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.990274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.990508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.990521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.990751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.990961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.990974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.991158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.991310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.991324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.991491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.991654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.991667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.991822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.992044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.992058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 00:26:27.929 [2024-05-15 11:18:24.992292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.992572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.929 [2024-05-15 11:18:24.992585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.929 qpair failed and we were unable to recover it. 
00:26:27.929 [2024-05-15 11:18:24.992811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.993020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.993034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 00:26:27.930 [2024-05-15 11:18:24.993213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.993374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.993388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 00:26:27.930 [2024-05-15 11:18:24.993478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.993708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.993724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 00:26:27.930 [2024-05-15 11:18:24.993868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.994073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.994087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 
00:26:27.930 [2024-05-15 11:18:24.994248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.994405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.994418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 00:26:27.930 [2024-05-15 11:18:24.994573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.994736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.994750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 00:26:27.930 [2024-05-15 11:18:24.994959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.995177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.995192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 00:26:27.930 [2024-05-15 11:18:24.995440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.995651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.930 [2024-05-15 11:18:24.995664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:27.930 qpair failed and we were unable to recover it. 
00:26:27.932 [2024-05-15 11:18:25.020599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.020688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.020701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.932 qpair failed and we were unable to recover it.
00:26:27.932 [2024-05-15 11:18:25.020848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.021065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.021078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:27.932 qpair failed and we were unable to recover it.
00:26:27.932 [2024-05-15 11:18:25.021290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.021573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.021590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.932 qpair failed and we were unable to recover it.
00:26:27.932 [2024-05-15 11:18:25.021827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.021998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.932 [2024-05-15 11:18:25.022011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:27.932 qpair failed and we were unable to recover it.
00:26:27.933 [2024-05-15 11:18:25.028989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.029225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.029240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.029464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.029569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.029582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.029813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.029965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.029980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.030207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.030417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.030431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-05-15 11:18:25.030602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.030832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.030846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.031085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.031270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.031284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.031570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.031718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.031732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.031965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.032105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.032119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-05-15 11:18:25.032339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.032504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.032518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.032680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.032915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.032928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.033030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.033127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.033140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.033324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.033464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.033478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-05-15 11:18:25.033681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.033840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.033854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.034074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.034173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.034187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.034292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.034447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.034460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.034621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.034774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.034787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-05-15 11:18:25.034944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.035045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.035058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.035156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.035370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.035384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.035478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.035736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.035749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.035991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.036113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.036126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-05-15 11:18:25.036335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.036583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.036596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-05-15 11:18:25.036805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.037034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-05-15 11:18:25.037048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.037290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.037397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.037410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.037571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.037780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.037793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.038016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.038243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.038257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.038494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.038704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.038717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.038926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.039175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.039188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.039366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.039600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.039613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.039825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.040033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.040047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.040213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.040382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.040395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.040538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.040801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.040814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.041067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.041214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.041228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.041423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.041691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.041704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.041915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.042175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.042189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.042415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.042628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.042641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.042736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.042962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.042976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.043208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.043295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.043309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.043469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.043710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.043724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.043885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.044101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.044114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.044348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.044522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.044535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.044717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.044908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.044937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.045070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.045311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.045342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.045533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.045846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.045862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.046003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.046236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.046250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.046422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.046636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.046650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.046794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.046934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.046948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.047188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.047404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.047417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.047520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.047792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.047820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-05-15 11:18:25.048066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.048236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.048266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.048536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.048730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.048759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.049001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.049262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.049292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-05-15 11:18:25.049545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-05-15 11:18:25.049726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.049754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 
00:26:27.935 [2024-05-15 11:18:25.050021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.050281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.050317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 00:26:27.935 [2024-05-15 11:18:25.050590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.050820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.050848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 00:26:27.935 [2024-05-15 11:18:25.051103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.051367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.051397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 00:26:27.935 [2024-05-15 11:18:25.051666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.051903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.051931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 
00:26:27.935 [2024-05-15 11:18:25.052140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.052415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.052445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 00:26:27.935 [2024-05-15 11:18:25.052641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.052878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.052906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 00:26:27.935 [2024-05-15 11:18:25.053174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.053335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.053348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 00:26:27.935 [2024-05-15 11:18:25.053558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.053768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.935 [2024-05-15 11:18:25.053797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.935 qpair failed and we were unable to recover it. 
00:26:27.938 [2024-05-15 11:18:25.094220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.094408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.094438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.094698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.094946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.094975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.095196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.095442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.095470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.095746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.096011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.096039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 
00:26:27.938 [2024-05-15 11:18:25.096257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.096433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.096462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.096585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.096849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.096878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.097175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.097336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.097349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.097586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.097746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.097760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 
00:26:27.938 [2024-05-15 11:18:25.097923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.098175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.098206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.098425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.098702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.098730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.098853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.099042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.099071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.099341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.099532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.099561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 
00:26:27.938 [2024-05-15 11:18:25.099840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.100102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.100116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.100351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.100589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.100603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.100824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.100991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.101020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.101291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.101496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.101524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 
00:26:27.938 [2024-05-15 11:18:25.101800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.102041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.102054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.102287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.102446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.102475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.102683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.102927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.102956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.938 [2024-05-15 11:18:25.103215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.103431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.103444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 
00:26:27.938 [2024-05-15 11:18:25.103681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.103787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.938 [2024-05-15 11:18:25.103816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.938 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.104019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.104209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.104239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.104538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.104740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.104753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.104993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.105201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.105232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.105485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.105726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.105754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.106023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.106271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.106302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.106552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.106746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.106776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.107068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.107290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.107305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.107543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.107811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.107824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.108057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.108277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.108307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.108488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.108712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.108740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.109032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.109184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.109198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.109434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.109601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.109631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.109905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.110162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.110200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.110449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.110634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.110663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.110931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.111103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.111119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.111366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.111571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.111601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.111877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.112056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.112085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.112267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.112527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.112556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.112760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.113018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.113048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.113306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.113595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.113623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.113927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.114220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.114235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.114461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.114704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.114734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.114927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.115181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.115212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.115486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.115757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.115786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.115977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.116217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.116247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.116448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.116648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.116677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.116938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.117152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.117198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 
00:26:27.939 [2024-05-15 11:18:25.117450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.117646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.117675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.117863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.118077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.118111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.939 qpair failed and we were unable to recover it. 00:26:27.939 [2024-05-15 11:18:25.118383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.118587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.939 [2024-05-15 11:18:25.118626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.118793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.118960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.118994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 
00:26:27.940 [2024-05-15 11:18:25.119190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.119454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.119483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.119748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.119945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.119975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.120225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.120495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.120509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.120727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.120999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.121029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 
00:26:27.940 [2024-05-15 11:18:25.121279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.121546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.121576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.121870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.122124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.122153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.122429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.122719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.122748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 00:26:27.940 [2024-05-15 11:18:25.122956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.123229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.940 [2024-05-15 11:18:25.123266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.940 qpair failed and we were unable to recover it. 
00:26:27.943 [2024-05-15 11:18:25.163657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.163763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.163777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.163994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.164140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.164154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.164334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.164521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.164534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.164788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.165089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.165118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 
00:26:27.943 [2024-05-15 11:18:25.165360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.165636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.165665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.165880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.166125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.166154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.166359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.166655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.166684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.166995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.167273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.167287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 
00:26:27.943 [2024-05-15 11:18:25.167520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.167684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.167698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:27.943 qpair failed and we were unable to recover it. 00:26:27.943 [2024-05-15 11:18:25.167866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.943 [2024-05-15 11:18:25.168015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.168043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.168320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.168580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.168609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.168883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.169096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.169110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 
00:26:28.219 [2024-05-15 11:18:25.169352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.169522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.169535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.169754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.169991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.170005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.170186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.170429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.170442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.170685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.170946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.170959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 
00:26:28.219 [2024-05-15 11:18:25.171122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.171376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.171406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.171696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.171955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.171984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.172271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.172501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.172515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.172703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.172929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.172957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 
00:26:28.219 [2024-05-15 11:18:25.173140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.173417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.173447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.173700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.173920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.173950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.174204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.174339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.174370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 00:26:28.219 [2024-05-15 11:18:25.174660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.174837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.174851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.219 qpair failed and we were unable to recover it. 
00:26:28.219 [2024-05-15 11:18:25.175096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.219 [2024-05-15 11:18:25.175287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.175302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.175451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.175621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.175634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.175745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.175905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.175919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.176091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.176335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.176349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 
00:26:28.220 [2024-05-15 11:18:25.176571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.176838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.176852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.176962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.177182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.177197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.177442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.177713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.177741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.177987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.178098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.178112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 
00:26:28.220 [2024-05-15 11:18:25.178261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.178426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.178441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.178623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.178882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.178911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.179211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.179507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.179536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.179754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.180006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.180034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 
00:26:28.220 [2024-05-15 11:18:25.180335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.180590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.180619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.180885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.181150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.181175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.181345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.181635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.181663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.181850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.182048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.182077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 
00:26:28.220 [2024-05-15 11:18:25.182270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.182356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.182369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.182586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.182736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.182764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.182892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.183174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.183205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.183423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.183668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.183682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 
00:26:28.220 [2024-05-15 11:18:25.183929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.184151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.220 [2024-05-15 11:18:25.184191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.220 qpair failed and we were unable to recover it. 00:26:28.220 [2024-05-15 11:18:25.184391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.184686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.184715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.184918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.185201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.185232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.185434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.185653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.185683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 
00:26:28.221 [2024-05-15 11:18:25.185961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.186231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.186262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.186481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.186752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.186781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.187041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.187224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.187239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.187489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.187731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.187745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 
00:26:28.221 [2024-05-15 11:18:25.187829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.188028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.188042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.188135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.188377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.188392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.188556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.188790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.188819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.189119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.189356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.189387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 
00:26:28.221 [2024-05-15 11:18:25.189675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.189953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.189982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.190243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.190413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.190426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.190578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.190740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.190754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.190940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.191157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.191176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 
00:26:28.221 [2024-05-15 11:18:25.191367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.191528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.191543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.191714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.191863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.191877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.192051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.192226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.192240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.192417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.192586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.192600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 
00:26:28.221 [2024-05-15 11:18:25.192851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.193007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.193021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-05-15 11:18:25.193212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.193398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-05-15 11:18:25.193412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.193644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.193899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.193915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.194179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.194418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.194432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-05-15 11:18:25.194677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.194798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.194813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.195005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.195170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.195184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.195359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.195583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.195596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.195832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.195936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.195950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-05-15 11:18:25.196118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.196362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.196376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.196663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.196859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.196873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.197116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.197338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.197352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.197621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.197774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.197788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-05-15 11:18:25.198037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.198221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.198263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.198449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.198598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.198612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.198863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.199091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.199105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.199256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.199453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.199467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-05-15 11:18:25.199663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.199911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.199926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.200094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.200331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.200346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.200592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.200765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.200780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.200959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.201231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.201246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-05-15 11:18:25.201439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.201628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.201643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.201869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.202022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-05-15 11:18:25.202036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-05-15 11:18:25.202136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.202306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.202324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.202498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.202661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.202675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-05-15 11:18:25.202839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.203015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.203029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.203253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.203479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.203494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.203745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.203998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.204028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.204256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.204457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.204487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-05-15 11:18:25.204685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.204961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.204992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.205196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.205471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.205485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.205716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.205972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.206001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.206276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.206514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.206528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-05-15 11:18:25.206698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.206853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.206871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.207068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.207293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.207308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.207441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.207558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.207572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.207760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.208035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.208067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-05-15 11:18:25.208345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.208547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.208576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.208717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.208918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.208948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.209146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.209316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.209331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.209604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.209824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.209853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-05-15 11:18:25.210110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.210225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.210240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.210464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.210638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.210653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-05-15 11:18:25.210808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.211052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-05-15 11:18:25.211081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.211348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.211633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.211662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-05-15 11:18:25.211820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.212098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.212127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.212401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.212680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.212709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.212857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.213126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.213156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.213356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.213586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.213600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-05-15 11:18:25.213826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.214004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.214033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.214300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.214520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.214548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.214825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.215091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.215106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.215283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.215482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.215511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-05-15 11:18:25.215711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.215912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.215940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.216225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.216383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.216397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.216507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.216752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.216766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.216937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.217103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.217116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-05-15 11:18:25.217210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.217398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.217428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.217616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.217852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.217881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.218160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.218369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.218399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.218608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.218806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.218835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-05-15 11:18:25.219040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.219208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.219240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.219427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.219622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.219652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.219787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.219975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.219989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-05-15 11:18:25.220249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-05-15 11:18:25.220423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.220437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 
00:26:28.225 [2024-05-15 11:18:25.220684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.220927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.220941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-05-15 11:18:25.221118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.221327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.221356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-05-15 11:18:25.221578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.221894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.221923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-05-15 11:18:25.222187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.222328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.222357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 
00:26:28.225 [2024-05-15 11:18:25.222559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.222754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.222783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-05-15 11:18:25.222986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.223214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.223245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-05-15 11:18:25.223436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.223711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.223740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-05-15 11:18:25.224021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.224296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-05-15 11:18:25.224310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 
00:26:28.225 [2024-05-15 11:18:25.224556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.224756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.224786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-05-15 11:18:25.225013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.225134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.225148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-05-15 11:18:25.225408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.225520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.225549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-05-15 11:18:25.225809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.226057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.226096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-05-15 11:18:25.226433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.226690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.226706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-05-15 11:18:25.226897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.227059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-05-15 11:18:25.227073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.227248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.227366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.227380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.227603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.227841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.227881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.228150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.228342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.228372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.228598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.228726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.228756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.229037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.229238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.229269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.229481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.229705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.229734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.230008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.230211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.230252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.230495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.230777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.230791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.231019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.231253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.231268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.231451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.231674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.231703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.231990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.232120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.232149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.232437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.232663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.232692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.232922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.233185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.233216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.233501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.233752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.233781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.234058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.234252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.234284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.234567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.234851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.234880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.235080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.235250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.235281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.235429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.235683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.235712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.236052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.236289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.236333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.236581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.236849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.236862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-05-15 11:18:25.237098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.237282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-05-15 11:18:25.237296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.237542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.237716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.237730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.237844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.238005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.238019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.238181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.238280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.238294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.238518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.238616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.238631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.238798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.239021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.239035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.239260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.239559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.239574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.239729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.239903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.239917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.240185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.240375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.240390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.240631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.240803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.240817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.241043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.241277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.241291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.241556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.241750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.241763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.242009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.242199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.242214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.242463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.242683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.242697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.242940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.243208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.243223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.243482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.243724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.243738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.243958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.244229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.244243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.244408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.244595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.244609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.244783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.245028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.245042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.245267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.245440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.245454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.245699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.245913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.245926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.246178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.246452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-05-15 11:18:25.246466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-05-15 11:18:25.246619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.246841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.246855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.247018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.247287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.247302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.247477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.247672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.247686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.247912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.248091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.248105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.248353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.248505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.248520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.248689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.248911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.248925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.249156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.249414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.249429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.249615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.249855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.249869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.250073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.250318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.250332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.250520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.250778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.250791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.251043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.251261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.251276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.251448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.251541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.251555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.251655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.251876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.251890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.252161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.252337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.252351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.252572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.252803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.252817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.252966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.253216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.253231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.253407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.253649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.253663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.253912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.254130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.254144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.254430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.254675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.254689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.254918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.255185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.255200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.255448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.255717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-05-15 11:18:25.255731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-05-15 11:18:25.255905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-05-15 11:18:25.256092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-05-15 11:18:25.256106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-05-15 11:18:25.256327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-05-15 11:18:25.256423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-05-15 11:18:25.256437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-05-15 11:18:25.256551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-05-15 11:18:25.256803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-05-15 11:18:25.256817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-05-15 11:18:25.257039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.257319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.257333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.257488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.257734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.257749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.257970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.258072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.258086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.258271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.258369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.258383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 
00:26:28.229 [2024-05-15 11:18:25.258632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.258726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.258740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.258822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.259013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.259027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.259276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.259448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.259462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.259708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.259956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.259970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 
00:26:28.229 [2024-05-15 11:18:25.260162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.260338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.260352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.260515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.260792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.260806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.261060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.261284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.261300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.261485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.261731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.261746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 
00:26:28.229 [2024-05-15 11:18:25.261895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.262083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.262097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.262267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.262514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.262528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.262780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.263031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.263045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.263295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.263476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.263490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 
00:26:28.229 [2024-05-15 11:18:25.263676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.263849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.263862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.264083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.264188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.229 [2024-05-15 11:18:25.264202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.229 qpair failed and we were unable to recover it. 00:26:28.229 [2024-05-15 11:18:25.264306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.264533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.264547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.264711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.264959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.264975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 
00:26:28.230 [2024-05-15 11:18:25.265249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.265334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.265348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.265535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.265783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.265796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.266048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.266291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.266306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.266568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.266814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.266827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 
00:26:28.230 [2024-05-15 11:18:25.267000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.267238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.267252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.267419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.267640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.267653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.267883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.268064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.268078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.268241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.268395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.268410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 
00:26:28.230 [2024-05-15 11:18:25.268560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.268731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.268745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.269010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.269088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.269105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.269356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.269523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.269536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.269808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.269996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.270009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 
00:26:28.230 [2024-05-15 11:18:25.270252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.270448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.270461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.270706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.270925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.270939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.271229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.271477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.271490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.271642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.271797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.271811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 
00:26:28.230 [2024-05-15 11:18:25.272107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.272364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.272379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.272608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.272862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.272875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.273045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.273287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.230 [2024-05-15 11:18:25.273302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.230 qpair failed and we were unable to recover it. 00:26:28.230 [2024-05-15 11:18:25.273552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.273807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.273823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 
00:26:28.231 [2024-05-15 11:18:25.274072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.274292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.274306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.274555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.274826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.274839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.274992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.275173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.275187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.275379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.275598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.275612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 
00:26:28.231 [2024-05-15 11:18:25.275844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.276039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.276053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.276274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.276503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.276517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.276741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.276843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.276857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.277026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.277247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.277261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 
00:26:28.231 [2024-05-15 11:18:25.277467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.277652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.277666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.277889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.278126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.278140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.278319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.278562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.278576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.278765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.279009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.279023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 
00:26:28.231 [2024-05-15 11:18:25.279206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.279325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.279339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.279581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.279680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.279693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.279848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.280096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.280110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-05-15 11:18:25.280262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.280491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-05-15 11:18:25.280505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 
00:26:28.232 [2024-05-15 11:18:25.280757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.280941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.280955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-05-15 11:18:25.281218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.281459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.281473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-05-15 11:18:25.281653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.281897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.281911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-05-15 11:18:25.282132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.282364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.282378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 
00:26:28.232 [2024-05-15 11:18:25.282675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.282862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.282875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-05-15 11:18:25.283071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.283287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.283302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-05-15 11:18:25.283468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.283637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.283651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-05-15 11:18:25.283750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.283944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-05-15 11:18:25.283957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 
00:26:28.232 [2024-05-15 11:18:25.284050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.232 [2024-05-15 11:18:25.284201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.232 [2024-05-15 11:18:25.284217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.232 qpair failed and we were unable to recover it.
00:26:28.232 [... the identical record — two posix_sock_create "connect() failed, errno = 111" lines, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420" line, and "qpair failed and we were unable to recover it." — repeats continuously, only the timestamps changing, from 2024-05-15 11:18:25.284 through 11:18:25.319 ...]
00:26:28.236 [2024-05-15 11:18:25.320033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.320255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.320270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.320472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.320701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.320716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.320891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.321135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.321149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.321325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.321414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.321428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 
00:26:28.236 [2024-05-15 11:18:25.321525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.321800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.321814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.322011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.322198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.322213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.322425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.322616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.322631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.322872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.323110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.323124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 
00:26:28.236 [2024-05-15 11:18:25.323306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.323610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.323624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.323794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.323968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.323982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.324150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.324309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.324323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.324496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.324658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.324672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 
00:26:28.236 [2024-05-15 11:18:25.324895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.325138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.325152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.325349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.325582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.325596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-05-15 11:18:25.325714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.325906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-05-15 11:18:25.325921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.326031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.326116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.326131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 
00:26:28.237 [2024-05-15 11:18:25.326234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.326456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.326470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.326595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.326890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.326904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.327128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.327297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.327315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.327539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.327656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.327670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 
00:26:28.237 [2024-05-15 11:18:25.327937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.328099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.328113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.328313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.328484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.328498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.328689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.328891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.328905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.329155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.329439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.329454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 
00:26:28.237 [2024-05-15 11:18:25.329675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.329970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.329999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.330200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.330355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.330385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.330659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.330982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.331011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.331216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.331487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.331516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 
00:26:28.237 [2024-05-15 11:18:25.331801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.332053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.332089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.332376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.332645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.332673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.332998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.333206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.333236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.333444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.333715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.333744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 
00:26:28.237 [2024-05-15 11:18:25.333980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.334215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.334246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.237 qpair failed and we were unable to recover it. 00:26:28.237 [2024-05-15 11:18:25.334388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.237 [2024-05-15 11:18:25.334527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.334556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.334867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.335143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.335185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.335441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.335599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.335613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 
00:26:28.238 [2024-05-15 11:18:25.335862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.336057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.336086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.336288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.336559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.336588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.336859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.337033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.337068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.337346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.337613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.337642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 
00:26:28.238 [2024-05-15 11:18:25.337907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.338051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.338080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.338304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.338505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.338536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.338665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.338860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.338874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.339093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.339283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.339314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 
00:26:28.238 [2024-05-15 11:18:25.339527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.339757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.339786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.339973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.340227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.340258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.340403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.340539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.340568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.340747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.341001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.341031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 
00:26:28.238 [2024-05-15 11:18:25.341153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.341315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.341351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.341638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.341792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.341821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.342118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.342422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.342453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.342736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.342987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.343016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 
00:26:28.238 [2024-05-15 11:18:25.343275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.343476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.343490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.343660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.343838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.343853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.344106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.344334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.344350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.344519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.344624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.344640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 
00:26:28.238 [2024-05-15 11:18:25.344806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.344919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.344933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.345162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.345336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.345351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.238 qpair failed and we were unable to recover it. 00:26:28.238 [2024-05-15 11:18:25.345591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.238 [2024-05-15 11:18:25.345811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.345842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.346127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.346390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.346421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 
00:26:28.239 [2024-05-15 11:18:25.346643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.346885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.346915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.347188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.347424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.347452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.347663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.347951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.347980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.348256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.348457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.348486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 
00:26:28.239 [2024-05-15 11:18:25.348682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.348776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.348790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.348969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.349190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.349222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.349481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.349592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.349623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.349946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.350113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.350128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 
00:26:28.239 [2024-05-15 11:18:25.350395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.350492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.350505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.350662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.350846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.350859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.351008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.351154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.351174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.351352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.351519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.351533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 
00:26:28.239 [2024-05-15 11:18:25.351708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.351982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.352010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.352248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.352398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.352427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.352550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.352720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.352734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.352933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.353062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.353076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 
00:26:28.239 [2024-05-15 11:18:25.353243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.353341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.353356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.353609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.353816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.353848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.354035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.354285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.354300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.354526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.354804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.354832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 
00:26:28.239 [2024-05-15 11:18:25.355084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.355361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.355394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.355549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.355819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.355848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.356052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.356246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.356277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.239 qpair failed and we were unable to recover it. 00:26:28.239 [2024-05-15 11:18:25.356567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.239 [2024-05-15 11:18:25.356823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.356854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.240 [2024-05-15 11:18:25.357040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.357313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.357328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.357551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.357789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.357818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.357978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.358197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.358229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.358479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.358705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.358719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.240 [2024-05-15 11:18:25.358954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.359223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.359238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.359349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.359521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.359536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.359805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.359910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.359925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.360020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.360224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.360257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.240 [2024-05-15 11:18:25.360417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.360643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.360673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.360949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.361225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.361255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.361464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.361742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.361772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.362001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.362199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.362242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.240 [2024-05-15 11:18:25.362532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.362806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.362820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.363071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.363306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.363321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.363498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.363612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.363627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.363825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.364070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.364084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.240 [2024-05-15 11:18:25.364345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.364529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.364557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.364839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.365065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.365095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.365316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.365532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.365561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.365829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.366047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.366061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.240 [2024-05-15 11:18:25.366319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.366493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.366508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.366679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.366868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.366885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.366969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.367202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.367234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 00:26:28.240 [2024-05-15 11:18:25.367517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.367736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.240 [2024-05-15 11:18:25.367766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.240 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.367971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.368209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.368241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.368476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.368669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.368683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.368982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.369218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.369233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.369459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.369704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.369734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.370004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.370225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.370257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.370478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.370592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.370606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.370774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.370994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.371009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.371104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.371252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.371267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.371434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.371589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.371619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.371914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.372211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.372256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.372511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.372629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.372644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.372857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.373113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.373141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.373434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.373686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.373715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.373970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.374227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.374264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.374496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.374682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.374711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.374908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.375047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.375077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.375284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.375555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.375586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.375866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.376039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.376053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.376293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.376471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.376504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.376664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.376932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.376963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.377247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.377525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.377540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.377675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.377930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.377945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.378156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.378360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.378376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 00:26:28.241 [2024-05-15 11:18:25.378555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.378697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.241 [2024-05-15 11:18:25.378727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.241 qpair failed and we were unable to recover it. 
00:26:28.241 [2024-05-15 11:18:25.378928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.379062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.379091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-05-15 11:18:25.379325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.379606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.379637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-05-15 11:18:25.379909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.380146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.380162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-05-15 11:18:25.380375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.380558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-05-15 11:18:25.380574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 
00:26:28.242 [2024-05-15 11:18:25.380738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.380842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.380857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.381053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.381336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.381368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.381637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.381964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.381994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.382206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.382423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.382454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.382678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.382873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.382904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.383203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.383349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.383381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.383585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.383825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.383839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.384006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.384230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.384262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.384451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.384650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.384679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.385003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.385204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.385236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.385518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.385717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.385746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.385952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.386105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.386119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.386360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.386505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.386535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.386816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.387011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.387040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.387308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.387581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.387596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.387888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.388153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.388174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.388384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.388624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.388638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.388763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.388933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.388947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.389188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.389319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.389333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.389423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.389599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.389628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.389856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.390057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.390087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-05-15 11:18:25.390235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.390424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-05-15 11:18:25.390455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.390660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.390937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.390966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.391250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.391447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.391477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.391693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.391963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.391977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.392080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.392305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.392321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.392443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.392561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.392574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.392751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.392933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.392962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.393214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.393377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.393391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.393590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.393837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.393867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.394099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.394250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.394281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.394420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.394656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.394686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.394897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.395100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.395130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.395327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.395539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.395569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.395718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.395952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.395983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.396189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.396413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.396442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.396705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.396995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.397009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.397177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.397426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.397456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.397663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.398004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.398033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.398279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.398472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.398503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.398655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.398949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.398964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.243 [2024-05-15 11:18:25.399114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.399300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.243 [2024-05-15 11:18:25.399315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.243 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.399482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.399608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.399623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.399861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.400063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.400099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.400291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.400493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.400523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.400746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.401008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.401038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.401285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.401493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.401524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.401677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.401907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.401936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.402224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.402367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.402397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.402655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.402881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.402910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.403196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.403445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.403459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.403561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.403736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.403750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.403922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.404143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.404181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.404373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.404605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.404640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.404864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.405129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.405158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.405355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.405550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.405592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.405771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.405938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.405952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.406040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.406223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.406237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.406414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.406569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.406597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.406752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.406941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.406970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.407187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.407446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.407476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.407683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.407892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.407922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.408076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.408285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.408316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.408554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.408712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.408746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.408960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.409209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.409240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.409453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.409659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.409689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.409898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.410093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.410122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.244 qpair failed and we were unable to recover it.
00:26:28.244 [2024-05-15 11:18:25.410420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.244 [2024-05-15 11:18:25.410559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.410589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.245 qpair failed and we were unable to recover it.
00:26:28.245 [2024-05-15 11:18:25.410817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.411057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.411086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.245 qpair failed and we were unable to recover it.
00:26:28.245 [2024-05-15 11:18:25.411377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.411515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.411548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.245 qpair failed and we were unable to recover it.
00:26:28.245 [2024-05-15 11:18:25.411751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.412022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.412052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.245 qpair failed and we were unable to recover it.
00:26:28.245 [2024-05-15 11:18:25.412272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.412477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.245 [2024-05-15 11:18:25.412507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.245 qpair failed and we were unable to recover it.
00:26:28.245 [2024-05-15 11:18:25.412750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.413064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.413094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.413305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.413580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.413615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.413751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.413921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.413935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.414029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.414191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.414206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-05-15 11:18:25.414322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.414443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.414456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.414574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.414675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.414689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.414953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.415120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.415152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.415359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.415562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.415591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-05-15 11:18:25.415893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.416161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.416205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.416410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.416548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.416577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.416848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.417053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.417083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.417293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.417481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.417496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-05-15 11:18:25.417780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.417883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.417897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.418053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.418178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.418193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.418422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.418609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.418638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.418973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.419256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.419287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-05-15 11:18:25.419472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.419665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.419679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.419865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.420107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.420137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.420436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.420559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.420588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.420812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.421038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.421067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-05-15 11:18:25.421299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.421512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.421553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.421662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.421846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.421860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.421952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.422179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.422216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-05-15 11:18:25.422443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.422626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-05-15 11:18:25.422655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.422979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.423182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.423212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.423360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.423480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.423509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.423642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.423794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.423808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.424072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.424237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.424251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.424463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.424545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.424558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.424747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.424989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.425125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.425489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.425787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.425986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.426229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.426360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.426379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.426632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.426858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.426887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.427085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.427278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.427310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.427524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.427771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.427801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.428083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.428285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.428315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.428464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.428652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.428681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.428900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.429114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.429143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.429297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.429503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.429533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.429810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.429903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.429917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.430184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.430282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.430296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.430469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.430575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.430590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.430759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.430850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.430864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.431087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.431332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.431364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.431573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.431775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.431804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.432068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.432332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.432347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-05-15 11:18:25.432542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.432720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.432749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.432876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.433071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.433100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-05-15 11:18:25.433219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.433448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-05-15 11:18:25.433477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.433637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.433794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.433824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 
00:26:28.247 [2024-05-15 11:18:25.434033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.434306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.434337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.434594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.434738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.434767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.434970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.435226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.435256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.435465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.435665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.435694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 
00:26:28.247 [2024-05-15 11:18:25.435962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.436080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.436109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.436377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.436603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.436631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.436878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.437150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.437191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.437342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.437522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.437536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 
00:26:28.247 [2024-05-15 11:18:25.437743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.438017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.438047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.438319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.438468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.438498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.438714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.439003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.439032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.439283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.439489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.439518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 
00:26:28.247 [2024-05-15 11:18:25.439769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.439949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.439978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.440255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.440484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.440513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.440762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.440996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.441010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.441281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.441454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.441468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 
00:26:28.247 [2024-05-15 11:18:25.441687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.441874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.441903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.442191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.442338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.442368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.442574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.442767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.442796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 00:26:28.247 [2024-05-15 11:18:25.443087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.443238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.247 [2024-05-15 11:18:25.443269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.247 qpair failed and we were unable to recover it. 
00:26:28.247 [2024-05-15 11:18:25.443482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.443671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.443701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.247 qpair failed and we were unable to recover it.
00:26:28.247 [2024-05-15 11:18:25.443838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.444031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.444061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.247 qpair failed and we were unable to recover it.
00:26:28.247 [2024-05-15 11:18:25.444348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.444568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.444597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.247 qpair failed and we were unable to recover it.
00:26:28.247 [2024-05-15 11:18:25.444871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.445125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.445154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.247 qpair failed and we were unable to recover it.
00:26:28.247 [2024-05-15 11:18:25.445382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.445583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.247 [2024-05-15 11:18:25.445596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.445703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.445885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.445899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.446063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.446262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.446293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.446528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.446706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.446736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.447034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.447152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.447176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.447328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.447481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.447495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.447725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.447904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.447933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.448153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.448378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.448409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.448625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.448847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.448861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.449142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.449347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.449363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.449562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.449764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.449793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.449990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.450268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.450298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.450501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.450752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.450781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.450967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.451251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.451282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.451414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.451600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.451614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.451831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.451995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.452026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.452301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.452483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.452497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.452684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.452786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.452800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.453048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.453151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.453170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.453278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.453478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.453492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.453737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.453917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.453946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.454253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.454464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.454494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.454661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.454921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.454951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.455102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.455324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.455354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.248 qpair failed and we were unable to recover it.
00:26:28.248 [2024-05-15 11:18:25.455512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.248 [2024-05-15 11:18:25.455762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.455791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.455920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.456140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.456155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.456357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.456582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.456611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.456930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.457132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.457160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.457332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.457532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.457561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.457805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.458073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.458102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.458309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.458500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.458529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.458720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.458844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.458873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.459129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.459329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.459359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.459520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.459695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.459710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.459977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.460073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.460088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.460263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.460382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.460396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.460626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.460847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.460878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.461178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.461431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.461461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.461618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.461833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.461848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.462016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.462196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.462212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.462386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.462552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.462566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.462754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.462958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.462988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.463184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.463314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.463344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.463558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.463713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.463742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.463896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.464146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.464204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.464432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.464732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.464746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.464924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.465108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.465122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.465284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.465404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.465418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.465527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.465778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.465808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.466089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.466293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.466325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.466472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.466675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.249 [2024-05-15 11:18:25.466704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.249 qpair failed and we were unable to recover it.
00:26:28.249 [2024-05-15 11:18:25.466842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.467090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.467105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.467282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.467484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.467498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.467603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.467841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.467854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.468022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.468271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.468285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.468388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.468582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.468595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.468789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.469053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.469070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.469313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.469427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.469441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.469557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.469812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.469827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.469926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.470087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.470101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.470297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.470545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.470575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.470846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.471041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.471070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.471334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.471490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.471519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.471720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.471964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.471978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.472142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.472317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.472333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.472460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.472568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.472612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.472765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.472950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.472986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.473215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.473409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-05-15 11:18:25.473438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-05-15 11:18:25.473590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.473757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.473771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-05-15 11:18:25.474017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.474125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.474139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-05-15 11:18:25.474270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.474452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.474466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-05-15 11:18:25.474635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.474958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.474988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 
00:26:28.531 [2024-05-15 11:18:25.475195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.475417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.475447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-05-15 11:18:25.475607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.475836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.475850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-05-15 11:18:25.476116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.476339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-05-15 11:18:25.476354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.476466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.476689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.476704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.476884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.477151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.477198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.477354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.477559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.477588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.477815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.478035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.478064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.478353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.478554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.478584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.478794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.479055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.479069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.479279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.479458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.479472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.479606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.479836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.479850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.479964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2408485 Killed "${NVMF_APP[@]}" "$@" 00:26:28.532 [2024-05-15 11:18:25.480160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.480191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.480314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.480516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.480530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.480643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.480749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.480764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.481032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.481145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.481160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:28.532 [2024-05-15 11:18:25.481353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.532 [2024-05-15 11:18:25.481515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.481530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.481710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:28.532 [2024-05-15 11:18:25.481891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.481905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.532 [2024-05-15 11:18:25.482155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.482354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.482368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.482460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.482623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.482637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.482895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.483077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.483092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.483250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.483427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.483442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.483600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.483717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.483731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.483917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.484216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.484539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.484863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.484995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.485223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.485369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.485399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.485639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.485954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.485971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.486171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.486350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.486382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-05-15 11:18:25.486640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.486879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.486908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-05-15 11:18:25.487141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.487413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-05-15 11:18:25.487444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.487586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.487746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.487775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.488052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.488249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.488264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.488492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.488644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.488658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-05-15 11:18:25.488781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.488883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.488897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.489072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2409285 00:26:28.533 [2024-05-15 11:18:25.489246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.489264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.489372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2409285 00:26:28.533 [2024-05-15 11:18:25.489542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.489564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:28.533 [2024-05-15 11:18:25.489678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.489853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.489868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2409285 ']' 00:26:28.533 [2024-05-15 11:18:25.490135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.490265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.490282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.533 [2024-05-15 11:18:25.490480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:28.533 [2024-05-15 11:18:25.490667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.490685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.490773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.533 [2024-05-15 11:18:25.490957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.490979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:28.533 [2024-05-15 11:18:25.491219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 11:18:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.533 [2024-05-15 11:18:25.491386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.491407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.491536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.491634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.491648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.491807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.492077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.492092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-05-15 11:18:25.492323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.492589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.492605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.492731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.492972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.492988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.493075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.493325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.493345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.493447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.493621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.493636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-05-15 11:18:25.493839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.494140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.494383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.494661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.494789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-05-15 11:18:25.494962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.495153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.495176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.495358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.495580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.495595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.495717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.495957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.495971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.496073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.496252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.496267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-05-15 11:18:25.496389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.496490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-05-15 11:18:25.496504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-05-15 11:18:25.496623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.496774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.496788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-05-15 11:18:25.496959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.497186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.497202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-05-15 11:18:25.497320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.497585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.497601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 
00:26:28.534 [2024-05-15 11:18:25.497863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.498106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.498121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-05-15 11:18:25.498290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.498411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.498424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-05-15 11:18:25.498524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.498683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.498697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-05-15 11:18:25.498915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.499101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-05-15 11:18:25.499115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 
00:26:28.534 [2024-05-15 11:18:25.499360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.499521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.499536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.499661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.499839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.499854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.499942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.500133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.500148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.500338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.500561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.500575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.500686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.501003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.501018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.501324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.501490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.501505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.501607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.501774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.501788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.502033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.502224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.502240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.502431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.502608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.502622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.502724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.502959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.502973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.503126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.503344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.503359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.503455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.503710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.503725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.503987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.504228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.504243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.504441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.504608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.504622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.504747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.504983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.504998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.505208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.505359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.505373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.505556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.505673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.505688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.505970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.506134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.506149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.506255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.506449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.506463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.506659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.506949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.506966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.507224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.507413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.507429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.507525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.507641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.507655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.534 [2024-05-15 11:18:25.507928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.508045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.534 [2024-05-15 11:18:25.508059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.534 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.508174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.508329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.508344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.508455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.508620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.508634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.508939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.509254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.509520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.509748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.509850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.509954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.510138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.510151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.510308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.510511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.510530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.510867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.511063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.511079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.511303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.511457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.511469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.511640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.511880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.511892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.512144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.512412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.512424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.512513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.512614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.512625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.512817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.513232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.513482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.513745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.513872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.513970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.514184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.514391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.514581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.514763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.514875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.515029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.515132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.535 [2024-05-15 11:18:25.515143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.535 qpair failed and we were unable to recover it.
00:26:28.535 [2024-05-15 11:18:25.515327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.515423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.515438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.515545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.515644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.515658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.515742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.515827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.515842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.516001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.516200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.516413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.516715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.516891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.517061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.517173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.517190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.517288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.517367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.517381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.517466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.517557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.517571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.517757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e3770 is same with the state(5) to be set
00:26:28.536 [2024-05-15 11:18:25.518005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.518281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.518554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.518797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.518892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.518964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.519148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.519457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.519725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.519906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.519992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.520086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.520342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.520523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.520679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.520846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.520955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.521049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.521268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-05-15 11:18:25.521280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-05-15 11:18:25.521367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.521448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.521458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-05-15 11:18:25.521537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.521612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.521621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-05-15 11:18:25.521767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.521914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.521925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-05-15 11:18:25.522010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.522095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.522107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 
00:26:28.536 [2024-05-15 11:18:25.522195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-05-15 11:18:25.522293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.522396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.522613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.522801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.522893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.522997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.523267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.523433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.523596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.523862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.523954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.524055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.524262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.524592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.524758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.524838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.524932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.525158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.525436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.525607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.525775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.525931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.526004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.526201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.526386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.526546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.526803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.526887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.526973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.527252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.527427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.527648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.527822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.527920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 
00:26:28.537 [2024-05-15 11:18:25.528008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.528082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.528093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.528172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.528268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.537 [2024-05-15 11:18:25.528278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.537 qpair failed and we were unable to recover it. 00:26:28.537 [2024-05-15 11:18:25.528349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.528431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.528441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.528527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.528606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.528630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.528730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.528897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.528907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.528986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.529197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.529438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.529623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.529777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.529995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.530313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.530503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.530738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.530900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.531054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.531240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.531390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.531541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.531765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.531924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.531992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.532224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.532406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.532560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.532846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.532950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.533027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.533210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.533438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.533662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.533808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.533978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.538 [2024-05-15 11:18:25.534128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.534304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.534578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 00:26:28.538 [2024-05-15 11:18:25.534820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.538 [2024-05-15 11:18:25.534926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.538 qpair failed and we were unable to recover it. 
00:26:28.539 [2024-05-15 11:18:25.535011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 00:26:28.539 [2024-05-15 11:18:25.535188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 00:26:28.539 [2024-05-15 11:18:25.535452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 00:26:28.539 [2024-05-15 11:18:25.535719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 
00:26:28.539 [2024-05-15 11:18:25.535905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.535993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 00:26:28.539 [2024-05-15 11:18:25.536064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.536219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.536230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 00:26:28.539 [2024-05-15 11:18:25.536339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.536413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.536424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 00:26:28.539 [2024-05-15 11:18:25.536536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.536609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.539 [2024-05-15 11:18:25.536619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.539 qpair failed and we were unable to recover it. 
00:26:28.539 [2024-05-15 11:18:25.536716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.536820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.536830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.536919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.537198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.537357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.537628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.537788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.537875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.538088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.538362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.538613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.538796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.538876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.538954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.539207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.539554] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
00:26:28.539 [2024-05-15 11:18:25.539607] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:28.539 [2024-05-15 11:18:25.539612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.539817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.539925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.540004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.540212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.540368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.540595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.540835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.540986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.541075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.541143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.541153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.539 qpair failed and we were unable to recover it.
00:26:28.539 [2024-05-15 11:18:25.541333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.541404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.539 [2024-05-15 11:18:25.541416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.541487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.541571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.541583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.541665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.541751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.541769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.541914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.541998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.542179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.542387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.542582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.542754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.542846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.542999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.543207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.543436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.543601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.543843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.543993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.544076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.544309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.544546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.544770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.544877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.544956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.545122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.545294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.545479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.545646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.545831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.545984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.546127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.546228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.546239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.540 qpair failed and we were unable to recover it.
00:26:28.540 [2024-05-15 11:18:25.546321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.546397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.540 [2024-05-15 11:18:25.546407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.546478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.546567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.546578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.546658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.546803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.546814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.546953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.547120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.547307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.547478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.547641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.547870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.547960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.548039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.548220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.548368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.548524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.548744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.548910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.548991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.549112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.549404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.549620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.549787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.549866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.549954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.550145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.550400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.550646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.550816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.550916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.550991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.551133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.551144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.551259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.551333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.551345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.551510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.551579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.541 [2024-05-15 11:18:25.551589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.541 qpair failed and we were unable to recover it.
00:26:28.541 [2024-05-15 11:18:25.551733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.551798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.551809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-05-15 11:18:25.551902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.551995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.552006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-05-15 11:18:25.552173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.552252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.552263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-05-15 11:18:25.552340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-05-15 11:18:25.552487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.552572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.552725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.552895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.552976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.553044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.553230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.553454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.553688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.553774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.553853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.554101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.554283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.554447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.554708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.554872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.554958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.555032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.555208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.555375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.555527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.555773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.555862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.555954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.556225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.556450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.556626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.556800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.556949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.557023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.557199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.557380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.557526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.557757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.557852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-05-15 11:18:25.557941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.558026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-05-15 11:18:25.558037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-05-15 11:18:25.558119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.558279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.558458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.558625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.558787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.558887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.559043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.559262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.559478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.559657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.559807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.559946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.560128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.560317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.560555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.560906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.560995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.561154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.561326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.561576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.561763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.561848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.561995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.562304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.562457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.562633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.562830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.562920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.563003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.563292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.563552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.563720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.563902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-05-15 11:18:25.564020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.564109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.564120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.564185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.564270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.564281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-05-15 11:18:25.564472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.564552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-05-15 11:18:25.564562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.564667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.564746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.564757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-05-15 11:18:25.564846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.564913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.564924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.565071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.565267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.565430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-05-15 11:18:25.565598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.565845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.565926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.566000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.566255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-05-15 11:18:25.566459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.566740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.566923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.566993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.567082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-05-15 11:18:25.567334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.567576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.567799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.567963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.568150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-05-15 11:18:25.568332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.568577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.568758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.568912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.569003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-05-15 11:18:25.569179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.569443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.569670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.569837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-05-15 11:18:25.569982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-05-15 11:18:25.570067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.570173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.570357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.570574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.570748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.570907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.571003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.571254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.571537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.545 [2024-05-15 11:18:25.571708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.571903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.571987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.572135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.572300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.572319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.572401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.572517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.572528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.572601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.572745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.572755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.572835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.573139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.573448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.573629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.573737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.573875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.574157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.574349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.574566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.574734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.574814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.574971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.575224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.575410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.575556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.575711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.575868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.576209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-05-15 11:18:25.576371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-05-15 11:18:25.576679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-05-15 11:18:25.576772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.576913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.576993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.577069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.577288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 
00:26:28.546 [2024-05-15 11:18:25.577446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.577637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.577827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.577892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.577981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 
00:26:28.546 [2024-05-15 11:18:25.578145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.578337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.578572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.578727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.578941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 
00:26:28.546 [2024-05-15 11:18:25.579034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.579302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.579471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.579643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.579730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 
00:26:28.546 [2024-05-15 11:18:25.579870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.580137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.580479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.580647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 
00:26:28.546 [2024-05-15 11:18:25.580872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.580978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.581059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.581206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.581217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.581303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.581385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.581396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 00:26:28.546 [2024-05-15 11:18:25.581548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.581655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.546 [2024-05-15 11:18:25.581665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.546 qpair failed and we were unable to recover it. 
00:26:28.549 [2024-05-15 11:18:25.598343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.598572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.598712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.598890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.598975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 
00:26:28.549 [2024-05-15 11:18:25.599059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.599267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.599564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.599749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.599896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 
00:26:28.549 [2024-05-15 11:18:25.599965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.600113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.600365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.549 qpair failed and we were unable to recover it. 00:26:28.549 [2024-05-15 11:18:25.600590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.549 [2024-05-15 11:18:25.600727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.600736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.600808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.600887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.600897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.601051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.601307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.601460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.601678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.601856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.601935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.602114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.602380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.602537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.602830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.602981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.603059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.603221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.603476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.603616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.603845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.603930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.604004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.604246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.604421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.604601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.604826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.604894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.605039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.605320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.605482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.605688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.605850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.605945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.606012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.606231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.606383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-05-15 11:18:25.606672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-05-15 11:18:25.606762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-05-15 11:18:25.606943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.607237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.607537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.607725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.607824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.607962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.608304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.608565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.608770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.608978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.609133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.609360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.609528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.609683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.609831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.609900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.610223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.610446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.610603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.610857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.610957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.611031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.611194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.611361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.611588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.611819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.611965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.612128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.612375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.612697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.612873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.612951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-05-15 11:18:25.613092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.613236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-05-15 11:18:25.613247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-05-15 11:18:25.613326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.551 [2024-05-15 11:18:25.613461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.551 [2024-05-15 11:18:25.613472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.551 qpair failed and we were unable to recover it.
00:26:28.551 [2024-05-15 11:18:25.613541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.551 [2024-05-15 11:18:25.613630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.551 [2024-05-15 11:18:25.613640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.551 qpair failed and we were unable to recover it.
00:26:28.551 [2024-05-15 11:18:25.613777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.551 [2024-05-15 11:18:25.613869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.551 [2024-05-15 11:18:25.613879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.614040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.614316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.614631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.614806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.614890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.614961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.615255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.615420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.615632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.615781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.615872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.616175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.616526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.616685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.616847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.616931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.617023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.617314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.617510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:28.552 [2024-05-15 11:18:25.617605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.617876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.617957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.618025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.618195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.618343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.618634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.618877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.618970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.619066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.619234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.619389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.619546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.619768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.619907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.619987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.620056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.620141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.552 [2024-05-15 11:18:25.620152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.552 qpair failed and we were unable to recover it.
00:26:28.552 [2024-05-15 11:18:25.620222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.620382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.620610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.620758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.620849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.620921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.621160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.621469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.621637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.621779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.621923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.622101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.622255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.622486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.622650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.622801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.622937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.623142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.623292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.623507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.623655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.623821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.623911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.623979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.624218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.624382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.553 qpair failed and we were unable to recover it.
00:26:28.553 [2024-05-15 11:18:25.624540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.553 [2024-05-15 11:18:25.624681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.624753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.624820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.624830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.624901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.624985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.624996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.625063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.625228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.625452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.625618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.625785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.625862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.625940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.626154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.626410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.626571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.626820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.627025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.627391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.627647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.627815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.627935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.628011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.628076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.628087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.628228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.628302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.554 [2024-05-15 11:18:25.628312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.554 qpair failed and we were unable to recover it.
00:26:28.554 [2024-05-15 11:18:25.628388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.628474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.628484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.628578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.628745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.628755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.628828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.628896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.628906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.628973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 
00:26:28.554 [2024-05-15 11:18:25.629215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.629432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.629652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.629807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.629893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 
00:26:28.554 [2024-05-15 11:18:25.630040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.630120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.630130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.630205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.630277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.630286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-05-15 11:18:25.630421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.630496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-05-15 11:18:25.630505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.630601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.630747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.630757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.630836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.630903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.630913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.631002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.631223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.631443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.631664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.631892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.631985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.632080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.632239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.632396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.632548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.632694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.632859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.632939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.633011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.633237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.633398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.633628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.633849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.633936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.634005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.634168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.634397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.634566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.634753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.634828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.634931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.635115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.635385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.635620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.635807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.635910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 00:26:28.555 [2024-05-15 11:18:25.635985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.636070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.636084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.555 qpair failed and we were unable to recover it. 
00:26:28.555 [2024-05-15 11:18:25.636227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.555 [2024-05-15 11:18:25.636321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.636336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.636423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.636507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.636521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.636614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.636703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.636717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.636797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 
00:26:28.556 [2024-05-15 11:18:25.637189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.637434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.637620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.637873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.637983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 
00:26:28.556 [2024-05-15 11:18:25.638062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.638308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.638555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.638880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.638975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 
00:26:28.556 [2024-05-15 11:18:25.639066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.639327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.639508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.639778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.639901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 
00:26:28.556 [2024-05-15 11:18:25.639982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.640146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.640474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.640731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.640840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 
00:26:28.556 [2024-05-15 11:18:25.640918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.641147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.641334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 00:26:28.556 [2024-05-15 11:18:25.641515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.556 [2024-05-15 11:18:25.641600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.556 qpair failed and we were unable to recover it. 
00:26:28.556 [2024-05-15 11:18:25.641678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.556 [2024-05-15 11:18:25.641822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.556 [2024-05-15 11:18:25.641836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.556 qpair failed and we were unable to recover it.
[... same connect() failed, errno = 111 / qpair failure sequence repeated through 11:18:25.658; tqpair changes from 0x7fa688000b90 to 0x7fa690000b90 at 11:18:25.647261 ...]
00:26:28.559 [2024-05-15 11:18:25.659031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.559 [2024-05-15 11:18:25.659104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.559 [2024-05-15 11:18:25.659114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.559 qpair failed and we were unable to recover it.
00:26:28.559 [2024-05-15 11:18:25.659185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.559 qpair failed and we were unable to recover it. 00:26:28.559 [2024-05-15 11:18:25.659407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.559 qpair failed and we were unable to recover it. 00:26:28.559 [2024-05-15 11:18:25.659688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.559 qpair failed and we were unable to recover it. 00:26:28.559 [2024-05-15 11:18:25.659863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.559 [2024-05-15 11:18:25.659972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.660053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.660278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.660420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.660680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.660910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.660987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.661134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.661387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.661567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.661724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.661873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.661957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.662046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.662210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.662363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.662518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.662743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.662960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.663030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.663205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.663461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.663608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.663773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.663927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.663995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.664099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.664325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-05-15 11:18:25.664567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-05-15 11:18:25.664732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-05-15 11:18:25.664809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.664888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.664898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.664963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.665182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.665348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-05-15 11:18:25.665551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.665704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.665862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.665933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.666094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-05-15 11:18:25.666392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.666660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.666829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.666919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.666989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-05-15 11:18:25.667230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.667486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.667638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.667840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.667986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-05-15 11:18:25.668077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.668360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.668548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.668720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.668842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-05-15 11:18:25.668990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.669181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.669441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-05-15 11:18:25.669626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-05-15 11:18:25.669795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:18:25.669870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.669884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-05-15 11:18:25.670031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.670182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.670196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-05-15 11:18:25.670273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.670477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.670491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-05-15 11:18:25.670575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.670738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.670752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 
00:26:28.562 [2024-05-15 11:18:25.670905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.671004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:18:25.671017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix.c:1037:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 11:18:25.670905 through 11:18:25.688152 ...]
00:26:28.565 [2024-05-15 11:18:25.688325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.688427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.688441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.565 qpair failed and we were unable to recover it.
[... the same pattern repeats with tqpair=0x7fa698000b90 through 11:18:25.689563 ...]
00:26:28.565 [2024-05-15 11:18:25.689714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.689801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.689814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.565 qpair failed and we were unable to recover it. 00:26:28.565 [2024-05-15 11:18:25.689894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.689961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.689975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.565 qpair failed and we were unable to recover it. 00:26:28.565 [2024-05-15 11:18:25.690053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.690138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.690151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.565 qpair failed and we were unable to recover it. 00:26:28.565 [2024-05-15 11:18:25.690235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.690317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.565 [2024-05-15 11:18:25.690330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.565 qpair failed and we were unable to recover it. 
00:26:28.566 [2024-05-15 11:18:25.690420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.690493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.690506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.690582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.690658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.690671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.690762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.690837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.690850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.690942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 
00:26:28.566 [2024-05-15 11:18:25.691111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.691419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.691615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.691802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.691887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.691887] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:28.566 [2024-05-15 11:18:25.691913] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.566 [2024-05-15 11:18:25.691920] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.566 [2024-05-15 11:18:25.691926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.566 [2024-05-15 11:18:25.691932] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.566 [2024-05-15 11:18:25.691970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.692045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:28.566 [2024-05-15 11:18:25.692153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:28.566 [2024-05-15 11:18:25.692261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:28.566 [2024-05-15 11:18:25.692342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 
00:26:28.566 [2024-05-15 11:18:25.692261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:28.566 [2024-05-15 11:18:25.692443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.692789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.692885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.692970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.693241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 
00:26:28.566 [2024-05-15 11:18:25.693404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.693592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.693779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.693933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.694013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.694244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.694257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 
00:26:28.566 [2024-05-15 11:18:25.694399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.694480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.694493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.694650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.694882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.694896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.694983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.695191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 
00:26:28.566 [2024-05-15 11:18:25.695384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.695702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.695895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.695977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.566 qpair failed and we were unable to recover it. 00:26:28.566 [2024-05-15 11:18:25.696054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.566 [2024-05-15 11:18:25.696205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 
00:26:28.567 [2024-05-15 11:18:25.696374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.696625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.696786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.696896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.697050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 
00:26:28.567 [2024-05-15 11:18:25.697285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.697477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.697683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.697876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.697987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 
00:26:28.567 [2024-05-15 11:18:25.698068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.698145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.698158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.698258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.698336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.698349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.698615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.698864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.698877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.699036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.699216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.699230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 
00:26:28.567 [2024-05-15 11:18:25.699439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.699585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.699599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.699759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.699858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.699871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.700020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.700289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.700304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.700384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.700488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.700502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 
00:26:28.567 [2024-05-15 11:18:25.700675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.700769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.700783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.700896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.701207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.701513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 
00:26:28.567 [2024-05-15 11:18:25.701840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.701939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.702175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.702403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.702417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.702595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.702828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.702842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.567 qpair failed and we were unable to recover it. 00:26:28.567 [2024-05-15 11:18:25.703023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.567 [2024-05-15 11:18:25.703099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.703113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 
00:26:28.568 [2024-05-15 11:18:25.703345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.703437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.703450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 00:26:28.568 [2024-05-15 11:18:25.703544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.703749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.703763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 00:26:28.568 [2024-05-15 11:18:25.703977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 00:26:28.568 [2024-05-15 11:18:25.704274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 
00:26:28.568 [2024-05-15 11:18:25.704506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 00:26:28.568 [2024-05-15 11:18:25.704719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.704872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 00:26:28.568 [2024-05-15 11:18:25.704965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.705147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.705162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 00:26:28.568 [2024-05-15 11:18:25.705378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.705526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.568 [2024-05-15 11:18:25.705540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.568 qpair failed and we were unable to recover it. 
00:26:28.568 [2024-05-15 11:18:25.705794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.706226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.706607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.706883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.706989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.707139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.707308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.707323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.707431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.707522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.707535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.707760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.707978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.707992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.708142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.708261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.708277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.708453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.708631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.708646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.708810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.708969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.708983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.709088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.709190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.709204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.709418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.709631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.709646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.709783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.709926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.709941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.710125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.710211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.710225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.710367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.710581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.710595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.710759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.710943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.710957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.568 [2024-05-15 11:18:25.711141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.711326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.568 [2024-05-15 11:18:25.711341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.568 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.711516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.711633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.711647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.711815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.711973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.711987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.712185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.712402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.712416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.712525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.712629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.712643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.712883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.712981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.712995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.713182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.713341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.713357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.713520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.713695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.713710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.713899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.714275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.714478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.714743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.714940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.715092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.715319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.715334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.715500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.715689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.715703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.715950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.716272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.716555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.716803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.716990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.717244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.717427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.717441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.717654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.717747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.717760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.718032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.718160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.718178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.718338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.718467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.718480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.718655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.718955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.718969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.719109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.719211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.719226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.719336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.719529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.719543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.719641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.719738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.719751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.719984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.720124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.720138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.720379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.720585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.720599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.569 qpair failed and we were unable to recover it.
00:26:28.569 [2024-05-15 11:18:25.720750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.569 [2024-05-15 11:18:25.720827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.720841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.720996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.721207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.721222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.721386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.721543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.721557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.721749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.721870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.721884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.722039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.722198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.722211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.722387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.722501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.722515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.722614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.722758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.722772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.722871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.723051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.723065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.723268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.723422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.723436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.723544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.723711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.723725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.723932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.724086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.724099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.724241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.724400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.724413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.724616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.724738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.724754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.724926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.725122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.725136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.725285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.725435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.725449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.725614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.725885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.725900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.726191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.726407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.726421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.726516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.726619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.726634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.726867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.726988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.727147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.727397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.727628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.727892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.728147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.728366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.728380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.728477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.728616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.728629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.728833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.729037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.729050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.729313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.729424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.729437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.729595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.729678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.729691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.729879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.730079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.730093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.570 [2024-05-15 11:18:25.730307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.730480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.570 [2024-05-15 11:18:25.730494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.570 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.730603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.730911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.730925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.731158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.731278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.731292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.731455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.731635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.731648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.731951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.732057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.732073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.732260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.732409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.732422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.732583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.732763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.732776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.732851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.733230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.733518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.733803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.733987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.734090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.734412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.734702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.734807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.734932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.735014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.735027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.735263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.735349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.735362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.735571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.735675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.735688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.735947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.736208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.736222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.736372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.736622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.736635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.736795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.736902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.736916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.737069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.737147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.737160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.737260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.737420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.737433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.737703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.737975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.737989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.738147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.738262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.738276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.738425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.738563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.738578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.738827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.738920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.738933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.739192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.739417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.739431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.739584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.739691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.739705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.739939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.740149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.740163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.740273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.740368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.740382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.571 qpair failed and we were unable to recover it.
00:26:28.571 [2024-05-15 11:18:25.740525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.740735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.571 [2024-05-15 11:18:25.740750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.740920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.741077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.741091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.741330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.741535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.741548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.741806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.742060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.742073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.742319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.742461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.742475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.742593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.742750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.742765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.743002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.743158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.743179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.743404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.743557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.743571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.743646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.743735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.743749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.743843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.744000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.744014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.744240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.744404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.744418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.744591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.744808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.744823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.745057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.745276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.745292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.745395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.745493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.745508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.745717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.745940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.745955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.746185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.746272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.746287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.746468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.746564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.746578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.746686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.746855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.746869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.746959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.747130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.747146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.747238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.747340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.747355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.747499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.747754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.747770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.747950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.748099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.748114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.748289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.748442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.748456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.748599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.748699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.748713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.748936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.749027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.749041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.572 qpair failed and we were unable to recover it.
00:26:28.572 [2024-05-15 11:18:25.749340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.572 [2024-05-15 11:18:25.749530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.749543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.749835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.749929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.749943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.750096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.750196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.750210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.750411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.750503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.750516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.750728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.750914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.750928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.751035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.751219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.751234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.751461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.751616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.751630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.751887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.752025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.752039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.752298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.752450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.752464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.752621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.752829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.752842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.753003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.753161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.753192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.753313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.753418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.753431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.753611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.753696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.753710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.753899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.754126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.754139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.754312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.754417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.754431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.754532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.754674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.754687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.754939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.755034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.573 [2024-05-15 11:18:25.755048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.573 qpair failed and we were unable to recover it.
00:26:28.573 [2024-05-15 11:18:25.755207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.755302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.755317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.755490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.755609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.755622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.755767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.755990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.756004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.756158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.756317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.756334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 
00:26:28.573 [2024-05-15 11:18:25.756495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.756601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.756616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.756837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.757118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.757339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 
00:26:28.573 [2024-05-15 11:18:25.757641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.757767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.757939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.758120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.758134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.758247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.758362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.758375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.758582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.758724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.758739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 
00:26:28.573 [2024-05-15 11:18:25.758907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.759008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.759022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.759235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.759444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.759460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.573 qpair failed and we were unable to recover it. 00:26:28.573 [2024-05-15 11:18:25.759573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.759730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.573 [2024-05-15 11:18:25.759743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.759831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.760062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.760076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.760258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.760369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.760384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.760475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.760649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.760663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.760771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.761023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.761037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.761239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.761449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.761463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.761610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.761758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.761772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.761874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.762101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.762114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.762269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.762425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.762438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.762591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.762808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.762824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.762912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.763141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.763155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.763270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.763430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.763443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.763609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.763771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.763784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.763980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.764147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.764160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.764404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.764656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.764669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.764928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.765128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.765448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.765667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.765824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.766031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.766283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.766609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.766789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.766979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.767154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.767336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.767350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.767446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.767605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.767619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.767741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.767829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.767842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.767982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.768150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.768168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.768272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.768450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.768464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.768648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.768856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.768869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.769026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.769181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.769195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 
00:26:28.574 [2024-05-15 11:18:25.769358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.769564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.769577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.574 qpair failed and we were unable to recover it. 00:26:28.574 [2024-05-15 11:18:25.769738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.769899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.574 [2024-05-15 11:18:25.769914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.575 qpair failed and we were unable to recover it. 00:26:28.575 [2024-05-15 11:18:25.770055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.770224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.770237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.575 qpair failed and we were unable to recover it. 00:26:28.575 [2024-05-15 11:18:25.770469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.770635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.770648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.575 qpair failed and we were unable to recover it. 
00:26:28.575 [2024-05-15 11:18:25.770852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.771017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.771030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.575 qpair failed and we were unable to recover it. 00:26:28.575 [2024-05-15 11:18:25.771215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.771327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.771340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.575 qpair failed and we were unable to recover it. 00:26:28.575 [2024-05-15 11:18:25.771499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.771654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.575 [2024-05-15 11:18:25.771667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.575 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.771920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.772148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.772162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 
00:26:28.863 [2024-05-15 11:18:25.772267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.772434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.772448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.772611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.772717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.772730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.772923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.773193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 
00:26:28.863 [2024-05-15 11:18:25.773429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.773793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.773927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.774091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.774234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-05-15 11:18:25.774249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-05-15 11:18:25.774390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.774497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.774512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.864 [2024-05-15 11:18:25.774712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.774972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.774986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.775205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.775308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.775322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.775478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.775629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.775642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.775747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.775904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.775917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.864 [2024-05-15 11:18:25.776001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.776160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.776178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.776396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.776627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.776640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.776870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.777023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.777036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.777204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.777359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.777373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.864 [2024-05-15 11:18:25.777482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.777652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.777665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.777933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.778180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.778194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.778370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.778514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.778527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-05-15 11:18:25.778685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.778928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-05-15 11:18:25.778941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.864 [2024-05-15 11:18:25.779174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.779318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.779332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.779563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.779663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.779677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.779853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.779999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.780012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.780250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.780353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.780369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.780601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.780740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.780753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.780996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.781194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.781208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.781321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.781615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.781629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.781880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.782257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.782515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.782788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.782903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.783119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.783287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.783301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.783527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.783666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.783679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.783919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.784059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.784072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.784215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.784372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.784385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.784595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.784751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.784764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.864 qpair failed and we were unable to recover it.
00:26:28.864 [2024-05-15 11:18:25.784940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.785094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.864 [2024-05-15 11:18:25.785108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.785314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.785421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.785435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.785582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.785719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.785732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.785945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.786084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.786097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.786208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.786383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.786397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.786618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.786891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.786904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.787147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.787407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.787421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.787572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.787834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.787847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.788077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.788215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.788229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.788387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.788589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.788602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.788690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.788932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.788945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.789099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.789204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.789217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.789379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.789583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.789596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.789784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.789932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.789945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.790094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.790311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.790325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.790508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.790727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.790740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.791026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.791290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.791307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.791477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.791617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.791630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.791781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.791873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.791886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.791973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.792118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.792131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.792242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.792451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.792464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.792701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.792778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.792791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.792884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.793119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.793434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.793679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.793873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.793995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.794096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.794203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.794218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.794378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.794454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.794467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.794616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.794698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-05-15 11:18:25.794712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-05-15 11:18:25.794854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.794997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.795174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.795445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.795741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.795903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.795976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.796186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.796200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.796355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.796586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.796600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.796703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.796853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.796872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.796974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.797127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.797141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.797225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.797432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.797445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.797526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.797613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.797626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.797713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.800407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.800422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.800671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.800854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.800867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.801007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.801277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.801564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.801887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.801987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.802158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.802478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.802885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.802992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.803159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.803391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.803405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.803614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.803820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.803833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.803910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.804194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.804464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.804670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.804822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.804985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.805262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.805496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.805789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.805887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.866 [2024-05-15 11:18:25.806068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.806151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.866 [2024-05-15 11:18:25.806175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.866 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.806396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.806559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.806573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.806685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.806846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.806859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.806962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.807202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.807559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.807733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.807850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.808019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.808189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.808203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.808297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.808399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.808411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.808568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.808724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.808737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.808873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.809172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.809503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.809761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.809990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.810066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.810218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.810231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.810424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.810560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.810574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.810682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.810783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.810797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.811021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.811213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.811518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.811764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.811930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.812079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.812393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.812637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.812905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.812996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.813096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.813356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.813596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.813767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.813878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.814022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.814033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.867 qpair failed and we were unable to recover it.
00:26:28.867 [2024-05-15 11:18:25.814179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.814341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.867 [2024-05-15 11:18:25.814352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.814498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.814652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.814662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.814747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.814967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.814977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.815107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.815297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.815583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.815883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.815958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.816104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.816256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.816540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.816805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.816894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.817064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.817233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.817509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.817818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.817986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.818081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.818159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.818177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.818337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.818487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.818500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.818673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.818747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.818760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.818917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.819143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.819340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.819648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.819829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.819928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.868 [2024-05-15 11:18:25.820009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.820101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.868 [2024-05-15 11:18:25.820114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.868 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.820262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.820417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.820426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.820506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.820630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.820640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.820783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.820874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.820884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.820968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.821205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.821554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.821717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.821790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.821877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.822146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.822492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.822780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.822919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.823052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.823279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.823459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.823740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.823906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.869 [2024-05-15 11:18:25.823992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.869 qpair failed and we were unable to recover it.
00:26:28.869 [2024-05-15 11:18:25.824124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.824429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.824573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.824867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.824954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 
00:26:28.869 [2024-05-15 11:18:25.825031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.825272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.825487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.825739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 
00:26:28.869 [2024-05-15 11:18:25.825926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.825990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.826207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.826441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.826683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.826827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 
00:26:28.869 [2024-05-15 11:18:25.826909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.827048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.827058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.869 qpair failed and we were unable to recover it. 00:26:28.869 [2024-05-15 11:18:25.827124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.869 [2024-05-15 11:18:25.827347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.827357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.827504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.827588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.827597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.827690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.827883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.827893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.828040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.828326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.828507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.828748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.828907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.828991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.829058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.829204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.829214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.829417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.829573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.829583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.829665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.829884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.829893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.829983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.830212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.830437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.830683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.830887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.830981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.831214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.831387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.831582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.831822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.831898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.831971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.832138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.832411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.832636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.832809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.832880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.832960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.833183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 
00:26:28.870 [2024-05-15 11:18:25.833357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.833499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.833660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.870 [2024-05-15 11:18:25.833804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.870 qpair failed and we were unable to recover it. 00:26:28.870 [2024-05-15 11:18:25.833940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 
00:26:28.871 [2024-05-15 11:18:25.834106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.834347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.834503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.834676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.834919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 
00:26:28.871 [2024-05-15 11:18:25.835005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.835171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.835393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.835541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 
00:26:28.871 [2024-05-15 11:18:25.835881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.835979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.836062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.836302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.836583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 
00:26:28.871 [2024-05-15 11:18:25.836858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.836947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.837173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.837411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.837553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 
00:26:28.871 [2024-05-15 11:18:25.837782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.837889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.838022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.838173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.838183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.838334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.838466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.838475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-05-15 11:18:25.838565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.838637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-05-15 11:18:25.838647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 
00:26:28.871 [2024-05-15 11:18:25.838789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.838921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.838931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.839080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.839296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.839584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.839808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.839907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.839975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.840219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.840457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.871 [2024-05-15 11:18:25.840696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.871 [2024-05-15 11:18:25.840777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.871 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.840912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.840990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.840999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.841082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.841155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.841167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.841313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.841538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.841547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.841783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.841866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.841878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.842038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.842253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.842494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.842655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.842815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.842947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.843260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.843417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.843577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.843884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.843976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.844052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.844226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.844455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.844686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.844850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.844945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.845026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.845176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.845413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.845634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.845814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.845973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.846114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.846192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.846202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.872 [2024-05-15 11:18:25.846278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.846478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.872 [2024-05-15 11:18:25.846487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.872 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.846563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.846629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.846639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.846716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.846790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.846800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.846938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.847173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.847493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.847713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.847863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.848032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.848263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.848418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.848561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.848780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.848925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.849011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.849192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.849363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.849547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.849730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.849905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.849973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.850152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.850393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.850618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.850696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.850860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.851177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.851520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.851861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.851949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.852109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.852254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.852269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.852385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.852620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.852633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.852722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.852804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.852818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.852965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.853055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.853068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.853180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.853264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.853278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.873 qpair failed and we were unable to recover it.
00:26:28.873 [2024-05-15 11:18:25.853432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.873 [2024-05-15 11:18:25.853506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.853519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.853695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.853837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.853850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.853995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.854249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.854514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.854785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.854963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.855038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.855317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.855594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.855782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.855949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.856094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.856178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.856192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.856435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.856514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.856527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.856615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.856753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.856767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.856853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.857213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.857489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.857761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.857869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.857974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.858207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.858521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.858765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.858931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.859088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.859338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.859581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.859832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.874 [2024-05-15 11:18:25.859938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.874 qpair failed and we were unable to recover it.
00:26:28.874 [2024-05-15 11:18:25.860088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.860231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.860245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.874 qpair failed and we were unable to recover it. 00:26:28.874 [2024-05-15 11:18:25.860344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.860475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.860489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.874 qpair failed and we were unable to recover it. 00:26:28.874 [2024-05-15 11:18:25.860580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.860742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.860756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.874 qpair failed and we were unable to recover it. 00:26:28.874 [2024-05-15 11:18:25.860907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.861037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.861051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.874 qpair failed and we were unable to recover it. 
00:26:28.874 [2024-05-15 11:18:25.861217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.861391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.861405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.874 qpair failed and we were unable to recover it. 00:26:28.874 [2024-05-15 11:18:25.861497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.874 [2024-05-15 11:18:25.861582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.861596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.861775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.861984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.861998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.862147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.862307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.862320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.862393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.862570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.862583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.862687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.862826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.862840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.863000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.863190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.863503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.863689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.863778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.863990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.864282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.864524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.864771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.864987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.865104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.865357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.865630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.865887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.866057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.866240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.866253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.866406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.866616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.866630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.866783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.866878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.866891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.867044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.867118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.867132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.867230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.867475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.867489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.867649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.867740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.867753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.867847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.868123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.868504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.868819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.868929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.869074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.869236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.869250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 
00:26:28.875 [2024-05-15 11:18:25.869403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.869490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.869503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.869609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.869747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.869760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.869899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.870121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.870134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.875 qpair failed and we were unable to recover it. 00:26:28.875 [2024-05-15 11:18:25.870226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.875 [2024-05-15 11:18:25.870375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.870388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.870478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.870627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.870640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.870716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.870811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.870826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.870904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.870991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.871094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.871408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.871786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.871903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.872008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.872227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.872241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.872390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.872475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.872488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.872633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.872773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.872786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.872874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.873178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.873446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.873768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.873875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.873962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.874146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.874465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.874701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.874865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.875073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.875236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.875250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.875333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.875466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.875480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.875622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.875771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.875784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.875954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.876142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.876448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.876756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.876861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.877037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.877241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.877255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.877341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.877479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.877492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.877634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.877781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.877794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 00:26:28.876 [2024-05-15 11:18:25.878005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.878178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.876 [2024-05-15 11:18:25.878192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.876 qpair failed and we were unable to recover it. 
00:26:28.876 [2024-05-15 11:18:25.878291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.876 [2024-05-15 11:18:25.878377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.876 [2024-05-15 11:18:25.878390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.876 qpair failed and we were unable to recover it.
00:26:28.876 [2024-05-15 11:18:25.878599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.876 [2024-05-15 11:18:25.878745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.878758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.878864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.878936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.878949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.879157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.879416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.879582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.879782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.879935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.880086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.880341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.880585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.880829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.880998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.881186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.881362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.881376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.881518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.881606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.881619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.881764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.881915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.881929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.882074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.882147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.882160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.882304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.882375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.882389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.882597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.882765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.882781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.882919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.883108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.883426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.883809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.883989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.884145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.884303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.884317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.884497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.884586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.884599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.884810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.885043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.885056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.885215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.885375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.885389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.877 qpair failed and we were unable to recover it.
00:26:28.877 [2024-05-15 11:18:25.885486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.885642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.877 [2024-05-15 11:18:25.885655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.885754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.885850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.885864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.885967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.886106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.886120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.886329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.886429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.886442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.886529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.886609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.886622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.886873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.887145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.887159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.887387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.887559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.887572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.887721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.887973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.887986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.888075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.888276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.888290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.888443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.888593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.888606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.888697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.888901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.888915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.889058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.889129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.889142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.889323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.889427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.889440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.889583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.889649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.889663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.889826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.890207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.890533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.890737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.890906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.891075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.891326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.891584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.891906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.891999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.892223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.892313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.892328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.892433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.892514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.892528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.892604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.892763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.892776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.892922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.893191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.893361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.893633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.893878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.878 [2024-05-15 11:18:25.893965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.878 qpair failed and we were unable to recover it.
00:26:28.878 [2024-05-15 11:18:25.894111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.894348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.894362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.894449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.894515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.894528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.894754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.894904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.894918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.895060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.895212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.895227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.895379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.895600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.895613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.895762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.895842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.895855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.896027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.896229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.896529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.896734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.896822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.896928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.897253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.897449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.897783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.897936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.898031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.898275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.898290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.898383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.898528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.898541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.898693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.898794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.879 [2024-05-15 11:18:25.898807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:28.879 qpair failed and we were unable to recover it.
00:26:28.879 [2024-05-15 11:18:25.898969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.899193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.899378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.899568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 
00:26:28.879 [2024-05-15 11:18:25.899823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.899907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.899983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.900191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.900509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 
00:26:28.879 [2024-05-15 11:18:25.900707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.900793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.900883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.901109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 00:26:28.879 [2024-05-15 11:18:25.901375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.879 qpair failed and we were unable to recover it. 
00:26:28.879 [2024-05-15 11:18:25.901607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.879 [2024-05-15 11:18:25.901824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.901963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.902033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.902046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.902187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.902334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.902348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.902510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.902729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.902742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.902846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.903113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.903330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.903502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.903894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.903994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.904226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.904296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.904311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.904549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.904650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.904663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.904817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.904968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.904982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.905228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.905389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.905403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.905579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.905683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.905699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.905859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.906181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.906424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.906689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.906787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.906939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.907265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.907579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.907870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.907957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.908047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.908341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.908579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.908859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.908946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.909026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.909112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.909125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.909285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.909375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.909389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 
00:26:28.880 [2024-05-15 11:18:25.909623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.909815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.909828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.909910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.910001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.910014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.880 qpair failed and we were unable to recover it. 00:26:28.880 [2024-05-15 11:18:25.910099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.880 [2024-05-15 11:18:25.910278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.910291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.910430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.910572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.910586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 
00:26:28.881 [2024-05-15 11:18:25.910692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.910784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.910797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.910956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.911226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.911414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 
00:26:28.881 [2024-05-15 11:18:25.911597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.911849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.911987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.912139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.912366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 
00:26:28.881 [2024-05-15 11:18:25.912629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.912787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.912939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.913307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.913510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 
00:26:28.881 [2024-05-15 11:18:25.913793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.913881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.914050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.914255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.914269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.914341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.914487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.914501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.914614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.914789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.914803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 
00:26:28.881 [2024-05-15 11:18:25.915021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.915173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.915186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.915418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.915584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.915597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.915738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.915877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.915890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 00:26:28.881 [2024-05-15 11:18:25.915978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.916132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.916146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it. 
00:26:28.881 [2024-05-15 11:18:25.916403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.916577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.881 [2024-05-15 11:18:25.916590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:28.881 qpair failed and we were unable to recover it.
[Log output condensed: the message pair above — posix_sock_create connect() failing with errno = 111 (ECONNREFUSED), then nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it" — repeats continuously from 11:18:25.916 through 11:18:25.940, first for tqpair=0x7fa698000b90, then for tqpair=0x7fa690000b90 and tqpair=0x7fa688000b90, all targeting addr=10.0.0.2, port=4420.]
00:26:28.884 [2024-05-15 11:18:25.940341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.940498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.940512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.884 qpair failed and we were unable to recover it. 00:26:28.884 [2024-05-15 11:18:25.940598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.940741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.940754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.884 qpair failed and we were unable to recover it. 00:26:28.884 [2024-05-15 11:18:25.940909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.941151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.941168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.884 qpair failed and we were unable to recover it. 00:26:28.884 [2024-05-15 11:18:25.941402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.884 [2024-05-15 11:18:25.941491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.941505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.941740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.941832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.941845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.942018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.942193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.942430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.942688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.942799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.942885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.943168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.943379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.943641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.943818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.943966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.944220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.944395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.944703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.944871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.944982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.945137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.945150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.945409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.945497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.945510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.945676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.945759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.945772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.945852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.946100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.946414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.946727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.946823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.946908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.947251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.947496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.947734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.947984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.948093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.948179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.948199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.948357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.948518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.948531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.948673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.948837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.948850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 00:26:28.885 [2024-05-15 11:18:25.948933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.949088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.949101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.885 qpair failed and we were unable to recover it. 
00:26:28.885 [2024-05-15 11:18:25.949252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.949406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.885 [2024-05-15 11:18:25.949420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.949588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.949681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.949695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.949838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.949910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.949926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.950019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.950116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.950129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.950270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.950449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.950462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.950629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.950865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.950878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.951026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.951124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.951138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.951294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.951386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.951399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.951564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.951712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.951725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.951940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.952129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.952360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.952616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.952774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.952870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.953126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.953442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.953794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.953944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.954091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.954276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.954516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.954843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.954925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.955132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.955311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.955511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.955707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.955940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.956040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.956271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.956512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 
00:26:28.886 [2024-05-15 11:18:25.956830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.956914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.886 qpair failed and we were unable to recover it. 00:26:28.886 [2024-05-15 11:18:25.957053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.886 [2024-05-15 11:18:25.957198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.957212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.957309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.957448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.957462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.957611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.957747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.957760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.957855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.958181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.958497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.958791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.958976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.959073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.959237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.959251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.959394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.959549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.959563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.959790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.959940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.959953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.960097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.960181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.960195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.960405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.960542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.960555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.960728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.960869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.960882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.961022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.961116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.961130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.961220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.961360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.961373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.961521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.961760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.961773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.961929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.962248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.962521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.962845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.962949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.963124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.963231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.963245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.963422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.963508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.963522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.963623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.963795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.963808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.963911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.964052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.964065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.964210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.964417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.964431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.964584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.964762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.964775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.964935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.965330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.965631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.965899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.965995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.966146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.966294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.966308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 
00:26:28.887 [2024-05-15 11:18:25.966454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.966615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.966628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.887 qpair failed and we were unable to recover it. 00:26:28.887 [2024-05-15 11:18:25.966704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.887 [2024-05-15 11:18:25.966876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.966890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.966994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.967262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.967465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.967730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.967843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.968017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.968239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.968253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.968393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.968589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.968603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.968715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.968804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.968817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.968921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.969099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.969332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.969655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.969818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.969958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.970310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.970627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.970801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.970898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.971058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.971315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.971570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.971794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.971963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.972103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.972283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.972580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.972767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.972935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.973195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.973347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.973360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.973439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.973597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.973609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.973766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.973851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.973864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.974075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.974234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.974248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.974327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.974496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.974509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.974584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.974748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.974762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.974916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 
00:26:28.888 [2024-05-15 11:18:25.975228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.975576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.975844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.888 [2024-05-15 11:18:25.975959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.888 qpair failed and we were unable to recover it. 00:26:28.888 [2024-05-15 11:18:25.976036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.976191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.976205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 
00:26:28.889 [2024-05-15 11:18:25.976368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.976532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.976545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 00:26:28.889 [2024-05-15 11:18:25.976677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.976899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.976912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 00:26:28.889 [2024-05-15 11:18:25.977070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.977221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.977235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 00:26:28.889 [2024-05-15 11:18:25.977377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.977482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.977495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 
00:26:28.889 [2024-05-15 11:18:25.977560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.977634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.977646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 00:26:28.889 [2024-05-15 11:18:25.977805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.978017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.978031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 00:26:28.889 [2024-05-15 11:18:25.978130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.978315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.978329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 00:26:28.889 [2024-05-15 11:18:25.978520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.978679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.889 [2024-05-15 11:18:25.978693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.889 qpair failed and we were unable to recover it. 
00:26:28.889 [2024-05-15 11:18:25.978800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.978884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.978898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.978999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.979258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.979563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.979738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.979911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.980131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.980465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.980705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.980881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.980970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.981110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.981255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.981270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.981412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.981509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.981522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.981677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.981889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.981902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.982010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.982252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.982596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.982840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.982990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.983073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.983288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.983302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.983405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.983544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.983557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.983704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.983844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.983858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.983948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.984322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.984495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.984688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.984857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.985014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.985288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.889 [2024-05-15 11:18:25.985301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.889 qpair failed and we were unable to recover it.
00:26:28.889 [2024-05-15 11:18:25.985478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.985618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.985632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.985774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.985937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.985950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.986047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.986214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.986228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.986369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.986453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.986466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.986636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.986779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.986793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.986951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.987091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.987104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.987263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.987357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.987369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.987584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.987748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.987762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.987969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.988208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.988511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.988754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.988864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.989022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.989233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.989412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.989766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.989998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.990172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.990409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.990700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.990918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.991095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.991180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.991194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.991293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.991521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.991535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.991746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.991836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.991849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.992023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.992230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.992546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.992778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.992943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.993044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.993204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.993217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.993373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.993453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.993468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.993634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.993844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.993857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.993955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.994047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.994060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.890 qpair failed and we were unable to recover it.
00:26:28.890 [2024-05-15 11:18:25.994270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.890 [2024-05-15 11:18:25.994362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.994375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.994588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.994768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.994781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.994923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.995207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.995459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.995778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.995872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.995969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.996230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.996492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.996877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.996978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.997185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.997344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.997357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.997460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.997561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.997574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.997739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.997837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.997851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.997995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.998186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.998525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.998705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.998876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.999051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.999146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.891 [2024-05-15 11:18:25.999160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:28.891 qpair failed and we were unable to recover it.
00:26:28.891 [2024-05-15 11:18:25.999310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:25.999401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:25.999414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:25.999623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:25.999714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:25.999728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:25.999880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:25.999957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:25.999971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.000217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.000306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.000320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 
00:26:28.891 [2024-05-15 11:18:26.000476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.000570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.000584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.000677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.000833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.000846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.000935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.001202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 
00:26:28.891 [2024-05-15 11:18:26.001392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.001723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.001907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.001998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.002089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.002248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.002264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 
00:26:28.891 [2024-05-15 11:18:26.002343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.002503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.002517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.002609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.002703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.002717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.002883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.003214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 
00:26:28.891 [2024-05-15 11:18:26.003475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.891 qpair failed and we were unable to recover it. 00:26:28.891 [2024-05-15 11:18:26.003734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.891 [2024-05-15 11:18:26.003837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.003851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.003926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.004125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.004326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.004728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.004832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.004906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.005152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.005416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.005659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.005836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.005933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.006035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.006198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.006434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.006698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.006855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.007010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.007217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.007371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.007677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.007773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.007999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.008243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.008454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.008625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.008730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.008876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.009105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.009342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.009607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.009863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.009965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.010116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.010302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.010541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.010800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.010960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.011060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.011302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.011596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 00:26:28.892 [2024-05-15 11:18:26.011841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.892 [2024-05-15 11:18:26.011927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.892 qpair failed and we were unable to recover it. 
00:26:28.892 [2024-05-15 11:18:26.012012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.012099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.012114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.012348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.012555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.012568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.012712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.012797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.012810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.012975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 
00:26:28.893 [2024-05-15 11:18:26.013177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.013516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.013778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.013874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.014054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.014220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.014235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 
00:26:28.893 [2024-05-15 11:18:26.014328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.014415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.014428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.014639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.014735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.014749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.014959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.015271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 
00:26:28.893 [2024-05-15 11:18:26.015510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.015704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.015805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.015989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.016241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 
00:26:28.893 [2024-05-15 11:18:26.016475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.016779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.016949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.017032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.017132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.017143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 00:26:28.893 [2024-05-15 11:18:26.017232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.017312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.893 [2024-05-15 11:18:26.017322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.893 qpair failed and we were unable to recover it. 
00:26:28.893 [2024-05-15 11:18:26.017416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.017484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.017494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.017624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.017755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.017765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.017859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.017953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.017963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.018043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.018178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.018372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.018596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.018807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.018890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.018986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.019149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.019167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.019261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.019335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.019348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.893 qpair failed and we were unable to recover it.
00:26:28.893 [2024-05-15 11:18:26.019442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.893 [2024-05-15 11:18:26.019533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.019547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.019620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.019692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.019705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.019855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.019932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.019946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.020019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.020296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.020566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.020759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.020922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.020999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.021240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.021392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.021559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.021701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.021841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.021929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.022251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.022468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.022599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.022846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.022992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.023124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.023356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.023628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.023852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.023934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.024033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.024311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.024531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.024749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.024966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.025180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.025393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.025617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.025860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.025996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.026079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.026332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.026655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.026901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.026992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.027135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.027307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.027539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.027809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.027944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.894 qpair failed and we were unable to recover it.
00:26:28.894 [2024-05-15 11:18:26.028037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.894 [2024-05-15 11:18:26.028182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.028261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.028570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.028831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.028905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.028981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.029150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.029511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.029689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.029927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.029993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.030186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.030359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.030508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.030723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.030908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.030999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.031217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.031361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.031549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.031772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.031845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.032008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.032218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.032520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.032748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.032839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.033030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.033106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.033116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.033273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.033354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.033364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.033499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.033572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.895 [2024-05-15 11:18:26.033582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.895 qpair failed and we were unable to recover it.
00:26:28.895 [2024-05-15 11:18:26.033723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.033861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.033871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.034072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.034276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.034286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.034420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.034574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.034584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.034809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.034952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.034962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 
00:26:28.895 [2024-05-15 11:18:26.035109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.035365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.035528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.035683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 
00:26:28.895 [2024-05-15 11:18:26.035852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.035942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.036027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.036106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.036116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.036259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.036322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.036332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 00:26:28.895 [2024-05-15 11:18:26.036483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.036550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.895 [2024-05-15 11:18:26.036560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.895 qpair failed and we were unable to recover it. 
00:26:28.895 [2024-05-15 11:18:26.036773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.036839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.036850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.037013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.037179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.037350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.037494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.037719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.037958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.038092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.038475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.038644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.038879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.038963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.039190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.039442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.039598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.039941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.039999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.040140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.040399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.040634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.040794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.040992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.041146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.041444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.041740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.041922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.042066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.042236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.042555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.042811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.042986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.043067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.043338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.043574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.043838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.043913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.044002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.044300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.044594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.044797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.044943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.045015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.045285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.045511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 
00:26:28.896 [2024-05-15 11:18:26.045864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.045964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.046023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.046176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.046187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.896 qpair failed and we were unable to recover it. 00:26:28.896 [2024-05-15 11:18:26.046318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.896 [2024-05-15 11:18:26.046387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.046396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.046543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.046664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.046674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 
00:26:28.897 [2024-05-15 11:18:26.046874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.047099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.047407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.047665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 
00:26:28.897 [2024-05-15 11:18:26.047839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.047983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.048126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.048191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.048201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.048349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.048486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.048496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.048560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.048698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.048708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 
00:26:28.897 [2024-05-15 11:18:26.048873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.049111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.049304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 00:26:28.897 [2024-05-15 11:18:26.049630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.897 [2024-05-15 11:18:26.049715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.897 qpair failed and we were unable to recover it. 
00:26:28.897 [2024-05-15 11:18:26.049798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.049929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.049938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.050015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.050339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.050500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.050669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.050850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.050922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.050990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.051210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.051438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.051622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.051765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.051921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.051990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.052067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.052409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.052573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.052792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.052936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.052997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.053140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.053310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.897 [2024-05-15 11:18:26.053553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.897 [2024-05-15 11:18:26.053640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.897 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.053722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.053852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.053862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.053935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.054225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.054409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.054676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.054776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.054864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.055155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.055337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.055564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.055718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.055870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.056002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.056219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.056440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.056673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.056837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.056911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.057008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.057318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.057484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.057770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.057913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.058005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.058300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.058459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.058611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.058846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.058983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.059131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.059354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.059521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.059649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.059815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.059890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.059946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.060245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.060377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.060579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.060815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.060922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.061062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.061206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.061446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.061613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.898 qpair failed and we were unable to recover it.
00:26:28.898 [2024-05-15 11:18:26.061780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.898 [2024-05-15 11:18:26.061864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.061874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.062027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.062247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.062550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.062694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.062792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.062887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.063185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.063402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.063628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.063856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.063939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.064034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.064245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.064468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.064697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.064777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.064911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.065218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.065559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.065731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.065875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.065955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.066250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.066420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.066670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.066826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.066907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.066986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.067272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.067446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.067663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.067819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.067905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.067970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.068107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.068117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.068203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.068277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.899 [2024-05-15 11:18:26.068286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:28.899 qpair failed and we were unable to recover it.
00:26:28.899 [2024-05-15 11:18:26.068358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.068426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.068435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.068560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.068646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.068655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.068729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.068875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.068885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.068953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 
00:26:28.899 [2024-05-15 11:18:26.069171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.069633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.069800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.069959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.070091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.070145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.070154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 
00:26:28.899 [2024-05-15 11:18:26.070239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.070340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.070354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.899 [2024-05-15 11:18:26.070426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.070510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.899 [2024-05-15 11:18:26.070524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.899 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.070668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.070876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.070889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.070979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.071236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.071416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.071575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.071801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.071900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.071985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.072173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.072397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.072561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.072792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.072945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.073186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.073505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.073859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.073959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.074051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.074283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.074562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.074759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.074844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.075059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.075233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.075248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.075412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.075578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.075591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.075734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.075827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.075840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.075991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.076234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.076509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.076742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.076840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.076980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.077307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.077491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.077753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.077856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.077929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.078207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.078440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.078615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.078824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.078928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 
00:26:28.900 [2024-05-15 11:18:26.079013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.079088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.079101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.900 qpair failed and we were unable to recover it. 00:26:28.900 [2024-05-15 11:18:26.079189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.079269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.900 [2024-05-15 11:18:26.079283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.079357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.079459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.079473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.079547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.079623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.079635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.079714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.079810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.079824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.079917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.080115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.080362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.080529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.080776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.080858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.081076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.081318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.081546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.081789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.081875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.081951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.082273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.082438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.082744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.082857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.082947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.083289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.083607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.083834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.083937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.084105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.084395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.084653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.084884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.084980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.085195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.085349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.085362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.085522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.085705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.085718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.085804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.085882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.085896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.086050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.086326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.086558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.086799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.086873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.086963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.087146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.087394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.087629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.087846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.087934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.088084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.088307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.901 [2024-05-15 11:18:26.088453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.088601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.088847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.088939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 00:26:28.901 [2024-05-15 11:18:26.089019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.089150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.901 [2024-05-15 11:18:26.089160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.901 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.089247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.089414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.089707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.089915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.089975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.090055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.090266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.090455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.090626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.090905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.090992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.091129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.091272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.091282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.091523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.091654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.091664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.091756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.091828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.091838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.091971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.092224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.092537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.092760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.092905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.093040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.093271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.093446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.093828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.093921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.094064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.094266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.094507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.094688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.094934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.094999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.095095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.095320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.095630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.095838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.095982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.096046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.096283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.096452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 
00:26:28.902 [2024-05-15 11:18:26.096623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.902 [2024-05-15 11:18:26.096771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:28.902 qpair failed and we were unable to recover it. 00:26:28.902 [2024-05-15 11:18:26.096842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.097143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.097462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 
00:26:29.183 [2024-05-15 11:18:26.097637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.097806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.097921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.098112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.098447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 
00:26:29.183 [2024-05-15 11:18:26.098633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.098908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.099005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.099152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.099163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.099277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.099450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.099473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.099611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.099787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.099803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 
00:26:29.183 [2024-05-15 11:18:26.099954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.100121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.100451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 00:26:29.183 [2024-05-15 11:18:26.100711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.183 [2024-05-15 11:18:26.100885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.183 qpair failed and we were unable to recover it. 
00:26:29.183 [2024-05-15 11:18:26.101060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.101340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.101541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.101798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.101899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.101990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.102218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.102232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.102335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.102411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.102424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.102599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.102809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.102822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.183 [2024-05-15 11:18:26.102914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.103004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.183 [2024-05-15 11:18:26.103018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.183 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.103161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.103314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.103327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.103408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.103507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.103520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.103734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.103898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.103911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.103994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.104271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.104521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.104754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.104896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.104971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.105170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.105438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.105659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.105755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.105904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.106135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.106344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.106553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.106718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.106863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.106995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.107161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.107406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.107700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.107850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.107993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.108151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.108453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.108612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.108772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.108844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.109159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.109344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.184 qpair failed and we were unable to recover it.
00:26:29.184 [2024-05-15 11:18:26.109476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.184 [2024-05-15 11:18:26.109608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.109617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.109761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.109892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.109902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.110043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.110138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.110148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.110304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.110402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.110412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.110549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.110742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.110752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.110898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.111129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.111368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.111558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.111801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.111880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.112025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.112304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.112484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.112672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.112822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.112922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.113006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.113184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.113351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.113575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.113867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.113967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.114032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.114322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.114481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.114717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.114890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.114959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.115093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.115103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.185 [2024-05-15 11:18:26.115259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.115338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.185 [2024-05-15 11:18:26.115348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.185 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.115429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.115509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.115519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.115608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.115693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.115702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.115772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.115848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.115858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.115936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.116090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.116305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.116510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.116805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.116896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.116974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.117099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.117109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.117195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.117392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.117402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.117546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.117621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.186 [2024-05-15 11:18:26.117631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.186 qpair failed and we were unable to recover it.
00:26:29.186 [2024-05-15 11:18:26.117795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.117885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.117895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.118043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.118184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.118367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 
00:26:29.186 [2024-05-15 11:18:26.118741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.118831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.118975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.119222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.119457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 
00:26:29.186 [2024-05-15 11:18:26.119664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.119882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.119983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.120070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.120310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 
00:26:29.186 [2024-05-15 11:18:26.120531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.186 qpair failed and we were unable to recover it. 00:26:29.186 [2024-05-15 11:18:26.120807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.186 [2024-05-15 11:18:26.120890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.121038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.121353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.121665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.121808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.121905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.122041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.122175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.122184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.122335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.122483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.122493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.122707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.122855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.122864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.123010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.123275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.123521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.123679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.123831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.123910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.124127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.124401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.124576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.124791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.124973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.125105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.125343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.125668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.125840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.125911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.126045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.126184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.126195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.126288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.126504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.126514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.126652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.126731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.126740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.126938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.127145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.127155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.127322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.127469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.127479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 00:26:29.187 [2024-05-15 11:18:26.127645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.127740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.127749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.187 qpair failed and we were unable to recover it. 
00:26:29.187 [2024-05-15 11:18:26.127817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.128007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.187 [2024-05-15 11:18:26.128017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.128162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.128242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.128252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.128395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.128538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.128548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.128678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.128826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.128837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 
00:26:29.188 [2024-05-15 11:18:26.128971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.129131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.129441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.129590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 
00:26:29.188 [2024-05-15 11:18:26.129819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.129904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.129977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.130201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.130448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 
00:26:29.188 [2024-05-15 11:18:26.130661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.130820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.130956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.131199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.131512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 
00:26:29.188 [2024-05-15 11:18:26.131695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.188 [2024-05-15 11:18:26.131866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.188 [2024-05-15 11:18:26.131946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.188 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.132150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.132378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 
00:26:29.189 [2024-05-15 11:18:26.132536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.132756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.132856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.133018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.133157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 
00:26:29.189 [2024-05-15 11:18:26.133402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.133623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.133785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.133925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 00:26:29.189 [2024-05-15 11:18:26.134009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.134087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.189 [2024-05-15 11:18:26.134096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.189 qpair failed and we were unable to recover it. 
00:26:29.192 [2024-05-15 11:18:26.153034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.153260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.153271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 00:26:29.192 [2024-05-15 11:18:26.153340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.153485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.153494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 00:26:29.192 [2024-05-15 11:18:26.153640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.153793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.153802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 00:26:29.192 [2024-05-15 11:18:26.153978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.154069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.154078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 
00:26:29.192 [2024-05-15 11:18:26.154212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.154422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.154432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 00:26:29.192 [2024-05-15 11:18:26.154502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.154673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.154683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 00:26:29.192 [2024-05-15 11:18:26.154893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.155026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.155036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 00:26:29.192 [2024-05-15 11:18:26.155194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.155329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.155338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.192 qpair failed and we were unable to recover it. 
00:26:29.192 [2024-05-15 11:18:26.155442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.155529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.192 [2024-05-15 11:18:26.155539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.155671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.155762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.155771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.155905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.156234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 
00:26:29.193 [2024-05-15 11:18:26.156424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.156589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.156761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.156926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.156999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 
00:26:29.193 [2024-05-15 11:18:26.157147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.157371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.157615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.157715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.157857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 
00:26:29.193 [2024-05-15 11:18:26.158208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.158411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.158570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.158804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.193 qpair failed and we were unable to recover it. 00:26:29.193 [2024-05-15 11:18:26.158936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.193 [2024-05-15 11:18:26.159015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 
00:26:29.194 [2024-05-15 11:18:26.159158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.159343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.159719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.159857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.159938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 
00:26:29.194 [2024-05-15 11:18:26.160093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.160321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.160484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.160773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.160846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 
00:26:29.194 [2024-05-15 11:18:26.160997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.161206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.161506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.161678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 
00:26:29.194 [2024-05-15 11:18:26.161892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.161974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.162053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.162293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.162462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 
00:26:29.194 [2024-05-15 11:18:26.162823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.162895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.163049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.163138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.163148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.163236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.163379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.163389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.163587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.163770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.163779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 
00:26:29.194 [2024-05-15 11:18:26.164003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.164149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.164159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.194 [2024-05-15 11:18:26.164262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.164394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.194 [2024-05-15 11:18:26.164404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.194 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.164556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.164646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.164656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.164811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.164906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.164917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 
00:26:29.195 [2024-05-15 11:18:26.165092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.165263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.165488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.165653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.165747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 
00:26:29.195 [2024-05-15 11:18:26.165884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.166180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.166345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.166549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 
00:26:29.195 [2024-05-15 11:18:26.166833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.166926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.166993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.167233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.167571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 
00:26:29.195 [2024-05-15 11:18:26.167732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.167826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.167955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.168035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.168044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.168113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.168184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.168193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 00:26:29.195 [2024-05-15 11:18:26.168340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.168481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.195 [2024-05-15 11:18:26.168491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.195 qpair failed and we were unable to recover it. 
00:26:29.195 [2024-05-15 11:18:26.168566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.168639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.168648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.168861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.169091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.169432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.169602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.169756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.169838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.169923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.170174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.170477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.170622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.170783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.170963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.171145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.171155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.171349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.171517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.171532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.171768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.171971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.171984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.172212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.172296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.172310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.172398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.172562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.172575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.172733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.172831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.172848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.172988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.173148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.173161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.173264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.173359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.195 [2024-05-15 11:18:26.173372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.195 qpair failed and we were unable to recover it.
00:26:29.195 [2024-05-15 11:18:26.173560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.173741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.173754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.173845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.173985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.173998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.174089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.174174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.174188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.174343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.174483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.174497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.174684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.174833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.174847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.175054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.175259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.175533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.175778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.175879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.176015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.176254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.176422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.176650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.176786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.176949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.177170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.177436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.177721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.177898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.177984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.178132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.178313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.178473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.178698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.178856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.178930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.179177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.179470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.179729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.179806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.179989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.180353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.180599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.180839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.180971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.181111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.181265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.181275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.181426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.181570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.181580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.181652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.181799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.181809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.182033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.182303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.182313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.182534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.182701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.196 [2024-05-15 11:18:26.182711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.196 qpair failed and we were unable to recover it.
00:26:29.196 [2024-05-15 11:18:26.182880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.183201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.183502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.183820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.183962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.184111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.184416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.184638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.184821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.184970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.185042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.185255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.185409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.185632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.185784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.185958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.186254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.186571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.186838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.186930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.187129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.187286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.187295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.187506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.187577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.187586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.187670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.187806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.197 [2024-05-15 11:18:26.187814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.197 qpair failed and we were unable to recover it.
00:26:29.197 [2024-05-15 11:18:26.187908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.187992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.188078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.188273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.188612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 
00:26:29.197 [2024-05-15 11:18:26.188772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.188926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.189062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.189214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.189225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.189411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.189630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.189639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.189726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.189792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.189801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 
00:26:29.197 [2024-05-15 11:18:26.189877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.190231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.190408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.190674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 
00:26:29.197 [2024-05-15 11:18:26.190836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.190907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.191056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.191195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.191205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.197 qpair failed and we were unable to recover it. 00:26:29.197 [2024-05-15 11:18:26.191273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.197 [2024-05-15 11:18:26.191404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.191413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.191509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.191586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.191595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.191765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.191908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.191917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.192012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.192325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.192562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.192728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.192866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.192952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.193251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.193416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.193616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.193776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.193856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.193939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.194172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.194460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.194638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.194799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.194938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.195207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.195285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.195294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.195454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.195588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.195597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.195687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.195772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.195781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.195863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.196217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.196394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.196583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.196803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.196969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.197038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.197247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.197398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.197562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.197782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.197859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.197998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.198241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.198397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.198530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.198704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.198862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.199024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.199266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.199441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.199603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.199774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.199916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.200211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.200559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.200794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.200962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.201126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.201381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.201689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.201830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.201987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.202064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.202368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.202560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.198 [2024-05-15 11:18:26.202827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.202908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.202984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.203073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.203083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.203173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.203304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.203313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 00:26:29.198 [2024-05-15 11:18:26.203445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.203613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.198 [2024-05-15 11:18:26.203625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.198 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.221872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.221956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.221966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.222092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.222261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.222444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.222635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.222800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.222960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.223038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.223201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.223420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.223706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.223787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.223934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.224130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.224312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.224480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.224701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.224891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.224995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.225169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.225366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.225487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.225713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.225856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.225989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.226159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.226335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.226494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 
00:26:29.200 [2024-05-15 11:18:26.226640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.226779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.226861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.227003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.200 [2024-05-15 11:18:26.227013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.200 qpair failed and we were unable to recover it. 00:26:29.200 [2024-05-15 11:18:26.227093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.227277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.227433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.227646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.227819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.227903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.228049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.228276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.228426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.228563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.228706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.228919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.228996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.229154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.229339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.229545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.229706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.229854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.229939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.230010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.230163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.230319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.230481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.230717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.230876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.230950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.231018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.231163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.231373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.231597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.231739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.231870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.232078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.232387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.232640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.232796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.232891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.233105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.233334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.233552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.233842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.233929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.234017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.234094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.234103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.234183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.234256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.234266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 00:26:29.201 [2024-05-15 11:18:26.234398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.234469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.201 [2024-05-15 11:18:26.234479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.201 qpair failed and we were unable to recover it. 
00:26:29.201 [2024-05-15 11:18:26.234618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.234750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.234759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.234893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.234957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.234967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.235034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.235265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.235410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.235555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.235780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.235854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.235923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.236130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.236361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.236513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.236786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.236870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.237006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.237285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.237501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.237679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.237835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.237976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.201 [2024-05-15 11:18:26.238069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.238133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.201 [2024-05-15 11:18:26.238142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.201 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.238210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.238347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.238556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.238707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.238786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.238861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.239110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.239252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.239479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.239693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.239854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.239940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.240015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.240252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.240411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.240648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.240802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.240895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.240972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.241113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.241322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.241543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.241853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.241936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.242040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.242265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.242451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.242586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.242731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.242905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.242982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.243118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.243345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.243579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.243789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.243933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.243998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.244152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.244332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.244543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.244765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.244858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.244989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.245230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.245385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.245644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.245877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.245965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.246034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.246194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.246510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.246682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.246852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.246941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.247023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.247302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.247452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.247692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.247870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.247955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.202 [2024-05-15 11:18:26.248035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.248106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.202 [2024-05-15 11:18:26.248115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.202 qpair failed and we were unable to recover it.
00:26:29.203 [2024-05-15 11:18:26.248206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.203 [2024-05-15 11:18:26.248271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.203 [2024-05-15 11:18:26.248280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.203 qpair failed and we were unable to recover it.
00:26:29.203 [2024-05-15 11:18:26.248414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.203 [2024-05-15 11:18:26.248551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.203 [2024-05-15 11:18:26.248560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.203 qpair failed and we were unable to recover it.
00:26:29.203 [2024-05-15 11:18:26.248717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.203 [2024-05-15 11:18:26.248790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.203 [2024-05-15 11:18:26.248800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.203 qpair failed and we were unable to recover it.
00:26:29.203 [2024-05-15 11:18:26.248889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.248951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.248961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.249047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.249118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.249128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.249229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.249367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.249376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.249512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.249802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.249812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.249979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.250337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.250537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.250759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.250847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.250930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.251244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.251434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.251608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.251765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.251859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.251997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.252220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.252454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.252670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.252807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.252957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.253100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.253304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.253314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.253397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.253537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.253547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.253719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.253860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.253870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.253959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.254184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.254466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.254636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.254849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.254927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.255007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.255186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.255411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.255700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.255777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.255912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.256199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.256511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.256736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.256839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.257036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.257174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.257184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.257263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.257341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.257350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.257498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.257649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.257659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.257866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.258107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.258432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.258576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.258807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.258896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.259065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.259313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.259552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.259694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.259863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.259994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.260083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.203 [2024-05-15 11:18:26.260265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.260462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.260656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 00:26:29.203 [2024-05-15 11:18:26.260906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.203 [2024-05-15 11:18:26.260994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.203 qpair failed and we were unable to recover it. 
00:26:29.204 [2024-05-15 11:18:26.261180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.261287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.261300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.261461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.261552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.261565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.261656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.261868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.261881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.261962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 
00:26:29.204 [2024-05-15 11:18:26.262152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.262417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.262625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.262848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.262953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 
00:26:29.204 [2024-05-15 11:18:26.263024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.263248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.263489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.263786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.263877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 
00:26:29.204 [2024-05-15 11:18:26.263951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.264253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.264564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 00:26:29.204 [2024-05-15 11:18:26.264767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.204 [2024-05-15 11:18:26.264851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.204 qpair failed and we were unable to recover it. 
00:26:29.205 [2024-05-15 11:18:26.264935 through 11:18:26.282965] The same error sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats identically for every subsequent connection attempt.
00:26:29.205 [2024-05-15 11:18:26.283166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.205 [2024-05-15 11:18:26.283232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.205 [2024-05-15 11:18:26.283241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.283320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.283518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.283528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.283603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.283658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.283668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.283903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.283982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.283991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.284079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.284306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.284524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.284751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.284999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.285063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.285269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.285481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.285695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.285853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.286006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.286248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.286465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.286614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.286891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.286963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.287046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.287189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.287393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.287597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.287818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.287909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.287993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.288223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.288377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.288595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.288838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.288918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.289056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.289299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.289523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.289732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.289891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.290033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.290188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.290411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.290668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.290878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.290963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.291112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.291347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.291671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.291820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.291901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.291978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.292152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.292325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.292552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.292721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.292860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.293005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.293187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.293432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.293661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.293810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.293886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.293948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.294173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.294341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.294579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.294791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.294929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.206 [2024-05-15 11:18:26.295007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.295068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.295078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 
00:26:29.206 [2024-05-15 11:18:26.295213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.295289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.206 [2024-05-15 11:18:26.295298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.206 qpair failed and we were unable to recover it. 00:26:29.207 [2024-05-15 11:18:26.295360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.207 [2024-05-15 11:18:26.295441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.207 [2024-05-15 11:18:26.295451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.207 qpair failed and we were unable to recover it. 00:26:29.207 [2024-05-15 11:18:26.295604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.207 [2024-05-15 11:18:26.295679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.207 [2024-05-15 11:18:26.295689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.207 qpair failed and we were unable to recover it. 00:26:29.207 [2024-05-15 11:18:26.295768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.207 [2024-05-15 11:18:26.295910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.207 [2024-05-15 11:18:26.295919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.207 qpair failed and we were unable to recover it. 
00:26:29.208 [2024-05-15 11:18:26.316251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.316317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.316327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.316395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.316548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.316559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.316641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.316799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.316810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.316944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 
00:26:29.208 [2024-05-15 11:18:26.317106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.317401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.317542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.317691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.317853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 
00:26:29.208 [2024-05-15 11:18:26.318211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.318396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.318712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.318923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.319057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 
00:26:29.208 [2024-05-15 11:18:26.319291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.319599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.319734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.319835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.319979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 
00:26:29.208 [2024-05-15 11:18:26.320202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.320420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.208 qpair failed and we were unable to recover it. 00:26:29.208 [2024-05-15 11:18:26.320572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.208 [2024-05-15 11:18:26.320665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.320675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.320879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.320957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.320969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.321057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.321143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.321153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.321265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.321371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.321387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.321540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.321756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.321771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.321839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.322057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.322071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.322225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.322300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.322316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.322579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.322685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.322698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.322927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.323273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.323526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.323765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.323858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.323951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.324132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.324549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.324877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.324974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.325077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.325289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.325303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.325460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.325553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.325567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.325806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.325893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.325907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.325976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.326233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.326510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.326790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.326969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.327119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.327295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.327306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.327448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.327600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.327611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.327744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.327832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.327843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.327931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.328202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.328420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.328610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.328767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.328969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.329268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.329507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.329810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.329901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.330049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.330341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.330590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.330775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.330840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.330932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.331142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.331296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.331441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.331676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.331835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.331910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.332126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.332259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.332270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 00:26:29.209 [2024-05-15 11:18:26.332341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.332472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.209 [2024-05-15 11:18:26.332482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.209 qpair failed and we were unable to recover it. 
00:26:29.209 [2024-05-15 11:18:26.332648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.332787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.332798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.209 qpair failed and we were unable to recover it.
00:26:29.209 [2024-05-15 11:18:26.332897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.332995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.209 qpair failed and we were unable to recover it.
00:26:29.209 [2024-05-15 11:18:26.333172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.209 qpair failed and we were unable to recover it.
00:26:29.209 [2024-05-15 11:18:26.333335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.209 qpair failed and we were unable to recover it.
00:26:29.209 [2024-05-15 11:18:26.333648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.209 [2024-05-15 11:18:26.333879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.333957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.334142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.334472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.334714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.334789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.334875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.335236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.335475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.335818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.335917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.336060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.336303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.336531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.336698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.336943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.337035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.337255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.337466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.337700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.337805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.337937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.338149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.338538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.338759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.338835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.339026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.339340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.339622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.339858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.339965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.340162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.340414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.340555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.340867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.340962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.341060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.341284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.341504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.341837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.341980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.342199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.342272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.342282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.342365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.342507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.342518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.342599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.342748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.342758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.342967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.343215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.343448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.343684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.343772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.343861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.344105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.344399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.344554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.344708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.344882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.344962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.345051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.345229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.345245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.345378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.345443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.345453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.345594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.345791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.345801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.345877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.346246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.346544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.346700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.346850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.346996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.347150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.347417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.347589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.347756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.210 qpair failed and we were unable to recover it.
00:26:29.210 [2024-05-15 11:18:26.347893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.210 [2024-05-15 11:18:26.348033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.348108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.348282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.348641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.348857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.348996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.349095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.349315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.349452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.349603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.349707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.349841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.350060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.350070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.350153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.350292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.211 [2024-05-15 11:18:26.350302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.211 qpair failed and we were unable to recover it.
00:26:29.211 [2024-05-15 11:18:26.350505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.350586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.350595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.350673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.350755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.350764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.350915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.351199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.351384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.351552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.351781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.351882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.351959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.352184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.352354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.352578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.352807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.352897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.352965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.353114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.353355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.353602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.353759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.353995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.354071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.354364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.354527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.354751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.354835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.354927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.355095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.355466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.355696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.355840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.355988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.356244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.356340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.356350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.356511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.356682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.356697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.356785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.356893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.356906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.356986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.357215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.357442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.357696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.357858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.357949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.358212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.358470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.358762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.358978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.359135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.359286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.359299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.359485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.359713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.359726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.359882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.359980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.359990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.360153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.360251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.360261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.360465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.360561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.360571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.360723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.360793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.360803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.360878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.361014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.361024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 
00:26:29.211 [2024-05-15 11:18:26.361117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.361249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.361259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.361327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.361518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.211 [2024-05-15 11:18:26.361528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.211 qpair failed and we were unable to recover it. 00:26:29.211 [2024-05-15 11:18:26.361625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.361709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.361719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.361789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.361869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.361878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 
00:26:29.212 [2024-05-15 11:18:26.362018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.362246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.362508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.362675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 
00:26:29.212 [2024-05-15 11:18:26.362903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.362995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.363086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.363336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.363551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 
00:26:29.212 [2024-05-15 11:18:26.363820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.363907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.363994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.364321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.364469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 
00:26:29.212 [2024-05-15 11:18:26.364751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.364836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.364930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.365145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.365312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 
00:26:29.212 [2024-05-15 11:18:26.365538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.365795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.365875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.366032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.366113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.366122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 00:26:29.212 [2024-05-15 11:18:26.366261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.366334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.212 [2024-05-15 11:18:26.366344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.212 qpair failed and we were unable to recover it. 
00:26:29.213 [2024-05-15 11:18:26.385688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.213 [2024-05-15 11:18:26.385827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.213 [2024-05-15 11:18:26.385836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.213 qpair failed and we were unable to recover it. 00:26:29.213 [2024-05-15 11:18:26.385908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.213 [2024-05-15 11:18:26.385989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.385999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.386085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.386237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 
00:26:29.214 [2024-05-15 11:18:26.386396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.386742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.386828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.386908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.387215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 
00:26:29.214 [2024-05-15 11:18:26.387517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.387754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.387841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.387922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.388229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 
00:26:29.214 [2024-05-15 11:18:26.388530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.388779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.388938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.389078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.389229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.389239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 00:26:29.214 [2024-05-15 11:18:26.389329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.389420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.214 [2024-05-15 11:18:26.389429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.214 qpair failed and we were unable to recover it. 
00:26:29.214 [2024-05-15 11:18:26.389568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.389656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.389665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.214 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.389756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.389847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.389860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.389955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0
00:26:29.214 [2024-05-15 11:18:26.390069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.390161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:29.214 [2024-05-15 11:18:26.390340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
00:26:29.214 [2024-05-15 11:18:26.390745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.390836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:29.214 [2024-05-15 11:18:26.390942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.391122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.391363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.391628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.391722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
00:26:29.214 [2024-05-15 11:18:26.391932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.392039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.214 [2024-05-15 11:18:26.392056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.214 qpair failed and we were unable to recover it.
[... repeated posix_sock_create connect() failures (errno = 111) and nvme_tcp_qpair_connect_sock errors for tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", timestamps 11:18:26.392147 through 11:18:26.403400, elided ...]
00:26:29.215 [2024-05-15 11:18:26.403565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.403698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.403712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.403786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.403871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.403884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.404043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.404223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 
00:26:29.215 [2024-05-15 11:18:26.404502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.404773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.404858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.404937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.405287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 
00:26:29.215 [2024-05-15 11:18:26.405555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.405752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.405987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.406084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.406224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.406239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.406396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.406562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.406576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 
00:26:29.215 [2024-05-15 11:18:26.406651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.406768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.406781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.406932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.407104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.407351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 
00:26:29.215 [2024-05-15 11:18:26.407534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.407728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.407911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.407992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.408246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 
00:26:29.215 [2024-05-15 11:18:26.408415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.408668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.408863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.408956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.409049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.409130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.409144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 
00:26:29.215 [2024-05-15 11:18:26.409245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.409322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.409336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.215 [2024-05-15 11:18:26.409402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.409501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.215 [2024-05-15 11:18:26.409515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.215 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.409594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.409676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.409690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.409788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.409873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.409887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.409992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.410185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.410351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.410518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.410717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.410877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.410945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.411019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.411241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.411386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.411541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.411698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.411790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.411862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.412108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.412297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.412465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.412621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.412816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.412989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.413083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.413348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.413660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.413826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.413915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.413991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.414233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.414416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.414606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.414759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.414910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.414987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.415053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.216 [2024-05-15 11:18:26.415219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.415403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.415549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 00:26:29.216 [2024-05-15 11:18:26.415835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.216 [2024-05-15 11:18:26.415978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420 00:26:29.216 qpair failed and we were unable to recover it. 
00:26:29.217 [2024-05-15 11:18:26.424715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.424846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.424861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.217 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:29.217 qpair failed and we were unable to recover it.
00:26:29.217 [2024-05-15 11:18:26.424964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.425060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.425075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.217 qpair failed and we were unable to recover it.
00:26:29.217 [2024-05-15 11:18:26.425160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:29.217 [2024-05-15 11:18:26.425261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.425276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.217 qpair failed and we were unable to recover it.
00:26:29.217 [2024-05-15 11:18:26.425417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.425498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.425514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.217 qpair failed and we were unable to recover it.
00:26:29.217 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:29.217 [2024-05-15 11:18:26.425661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.217 [2024-05-15 11:18:26.425757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.477 [2024-05-15 11:18:26.425770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.477 qpair failed and we were unable to recover it.
00:26:29.477 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:29.477 [2024-05-15 11:18:26.425849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.477 [2024-05-15 11:18:26.425940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.477 [2024-05-15 11:18:26.425953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.477 qpair failed and we were unable to recover it.
00:26:29.478 [2024-05-15 11:18:26.432109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-05-15 11:18:26.432339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-05-15 11:18:26.432524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-05-15 11:18:26.432780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.432870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 
00:26:29.478 [2024-05-15 11:18:26.433014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-05-15 11:18:26.433197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-05-15 11:18:26.433451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-05-15 11:18:26.433753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-05-15 11:18:26.433823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.433836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.433922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.434102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.434282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.434448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.434608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.434783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.434861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.434947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.435106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.435350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.435515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.435678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.435836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.435961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.436040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.436221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.436453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.436692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.436782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.436924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.437109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.437344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.437578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.437879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.437972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.438057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.438291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.438490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-05-15 11:18:26.438740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.438829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.438988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.439160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-05-15 11:18:26.439181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-05-15 11:18:26.439268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.439342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.439356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.439438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.439594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.439612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 [2024-05-15 11:18:26.439693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.439796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.439810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.439890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.440195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.440381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 [2024-05-15 11:18:26.440546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.440796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.440950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.441037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.441231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 [2024-05-15 11:18:26.441402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.441610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.441844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.441931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.442036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 [2024-05-15 11:18:26.442217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.442448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.442698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.442783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.442931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 [2024-05-15 11:18:26.443134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.443335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.443512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 Malloc0 00:26:29.480 [2024-05-15 11:18:26.443648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.443750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.443842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.480 [2024-05-15 11:18:26.444013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.444104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.444117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:29.480 [2024-05-15 11:18:26.444203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.480 [2024-05-15 11:18:26.444339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.480 [2024-05-15 11:18:26.444353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.444503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.444651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.444665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-05-15 11:18:26.444740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.444821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.444834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.444911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.445080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.445093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.445176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.445251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.445264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 00:26:29.480 [2024-05-15 11:18:26.445338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-05-15 11:18:26.445482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.445496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.445586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.445656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.445670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.445761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.445836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.445852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.445926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.446161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.446356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.446583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.446773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.446889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.446972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.447066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.447078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.447217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.447219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.481 [2024-05-15 11:18:26.447289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.447301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.447508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.447673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.447686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.447848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.448124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.448318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.448620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.448815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.448904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.449016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.449285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.449488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.449667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.449829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.449917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.449996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.450337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.450506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.450665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 
00:26:29.481 [2024-05-15 11:18:26.450844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.450945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.481 qpair failed and we were unable to recover it. 00:26:29.481 [2024-05-15 11:18:26.451017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.451154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.481 [2024-05-15 11:18:26.451173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.451382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.451522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.451536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.451603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.451687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.451700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 
00:26:29.482 [2024-05-15 11:18:26.451863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.452109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.452287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.452475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 
00:26:29.482 [2024-05-15 11:18:26.452650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.452842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.452917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.453231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.453567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 
00:26:29.482 [2024-05-15 11:18:26.453744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.453912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.454066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.454226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.454535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 
00:26:29.482 [2024-05-15 11:18:26.454705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.454818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.454959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.455099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.455114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.455216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.455358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.455372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.455520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.482 [2024-05-15 11:18:26.455610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.455623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 
00:26:29.482 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.482 [2024-05-15 11:18:26.455761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.482 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.482 [2024-05-15 11:18:26.455903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.455917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.456001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.456102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.456115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 00:26:29.482 [2024-05-15 11:18:26.456215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.456287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.456300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.482 qpair failed and we were unable to recover it. 
00:26:29.482 [2024-05-15 11:18:26.456470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.456551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.482 [2024-05-15 11:18:26.456564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.456666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.456802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.456815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.456907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.456981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.456993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.457071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.457241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.457255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 
00:26:29.483 [2024-05-15 11:18:26.457475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.457560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.457573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.457717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.457803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.457816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.457972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.458147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 
00:26:29.483 [2024-05-15 11:18:26.458426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.458697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.458788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.458885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.459146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 
00:26:29.483 [2024-05-15 11:18:26.459486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.459814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.459972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.460081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.460220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.460234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.460395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.460550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.460563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 
00:26:29.483 [2024-05-15 11:18:26.460665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.460756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.460770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.460913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.461333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.461613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 
00:26:29.483 [2024-05-15 11:18:26.461846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.461931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.462091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.462175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.462188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.462344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.462431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.462444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 00:26:29.483 [2024-05-15 11:18:26.462522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.462669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.483 [2024-05-15 11:18:26.462685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420 00:26:29.483 qpair failed and we were unable to recover it. 
00:26:29.483 [2024-05-15 11:18:26.462839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.483 [2024-05-15 11:18:26.463000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.483 [2024-05-15 11:18:26.463013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.483 qpair failed and we were unable to recover it.
00:26:29.483 [2024-05-15 11:18:26.463163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.483 [2024-05-15 11:18:26.463308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.483 [2024-05-15 11:18:26.463321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.483 qpair failed and we were unable to recover it.
00:26:29.483 [2024-05-15 11:18:26.463474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.483 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:29.483 [2024-05-15 11:18:26.463560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.483 [2024-05-15 11:18:26.463574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.483 qpair failed and we were unable to recover it.
00:26:29.483 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:29.483 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:29.483 [2024-05-15 11:18:26.463778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:29.484 [2024-05-15 11:18:26.463852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.463865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.463980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.464222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.464475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.464773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.464882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.464982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.465186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.465380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.465768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.465936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.466029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.466295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.466544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.466787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.466872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.466961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.467275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.467463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.467752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.467839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.467946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.468085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.468099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.468305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.468521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.468534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.468786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.468936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.468949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.469106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.469372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.469618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.469852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.469952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.470050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.470226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.470240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.470337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.470505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.470518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.470598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.470754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.470767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.470920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.471104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.471117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 [2024-05-15 11:18:26.471292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.471381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.484 [2024-05-15 11:18:26.471394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.484 qpair failed and we were unable to recover it.
00:26:29.484 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:29.484 [2024-05-15 11:18:26.471557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:29.485 [2024-05-15 11:18:26.471655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.471668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:29.485 [2024-05-15 11:18:26.471829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.471970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.471983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.472138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.472229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.472243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.472396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.472490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.472503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.472661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.472812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.472825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa688000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.473004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.473102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.473113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa690000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.473279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.473452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.473467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d5c10 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.473651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.473743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.473757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.473857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.474190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.474507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.474835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.474988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.475080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.475233] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:26:29.485 [2024-05-15 11:18:26.475252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.475265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa698000b90 with addr=10.0.0.2, port=4420
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.475421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.485 [2024-05-15 11:18:26.475475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:29.485 [2024-05-15 11:18:26.478315] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:26:29.485 [2024-05-15 11:18:26.478362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fa698000b90 (107): Transport endpoint is not connected
00:26:29.485 [2024-05-15 11:18:26.478415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:29.485 11:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2408735
00:26:29.485 [2024-05-15 11:18:26.487771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.485 [2024-05-15 11:18:26.487850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.485 [2024-05-15 11:18:26.487868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.485 [2024-05-15 11:18:26.487876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.485 [2024-05-15 11:18:26.487883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.485 [2024-05-15 11:18:26.487899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.497676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.485 [2024-05-15 11:18:26.497735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.485 [2024-05-15 11:18:26.497751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.485 [2024-05-15 11:18:26.497758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.485 [2024-05-15 11:18:26.497764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.485 [2024-05-15 11:18:26.497779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.507704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.485 [2024-05-15 11:18:26.507775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.485 [2024-05-15 11:18:26.507791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.485 [2024-05-15 11:18:26.507797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.485 [2024-05-15 11:18:26.507803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.485 [2024-05-15 11:18:26.507818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.517742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.485 [2024-05-15 11:18:26.517802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.485 [2024-05-15 11:18:26.517818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.485 [2024-05-15 11:18:26.517828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.485 [2024-05-15 11:18:26.517834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.485 [2024-05-15 11:18:26.517849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.527748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.485 [2024-05-15 11:18:26.527810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.485 [2024-05-15 11:18:26.527825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.485 [2024-05-15 11:18:26.527831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.485 [2024-05-15 11:18:26.527838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.485 [2024-05-15 11:18:26.527851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.485 qpair failed and we were unable to recover it.
00:26:29.485 [2024-05-15 11:18:26.537777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.485 [2024-05-15 11:18:26.537830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.485 [2024-05-15 11:18:26.537844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.485 [2024-05-15 11:18:26.537851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.485 [2024-05-15 11:18:26.537857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.486 [2024-05-15 11:18:26.537871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.486 qpair failed and we were unable to recover it.
00:26:29.486 [2024-05-15 11:18:26.547805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.486 [2024-05-15 11:18:26.547862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.486 [2024-05-15 11:18:26.547876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.486 [2024-05-15 11:18:26.547883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.486 [2024-05-15 11:18:26.547890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.486 [2024-05-15 11:18:26.547904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.486 qpair failed and we were unable to recover it.
00:26:29.486 [2024-05-15 11:18:26.557894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.486 [2024-05-15 11:18:26.557952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.486 [2024-05-15 11:18:26.557966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.486 [2024-05-15 11:18:26.557973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.486 [2024-05-15 11:18:26.557979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.486 [2024-05-15 11:18:26.557994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.486 qpair failed and we were unable to recover it.
00:26:29.486 [2024-05-15 11:18:26.567869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.486 [2024-05-15 11:18:26.567923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.486 [2024-05-15 11:18:26.567938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.486 [2024-05-15 11:18:26.567944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.486 [2024-05-15 11:18:26.567950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.486 [2024-05-15 11:18:26.567965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.486 qpair failed and we were unable to recover it.
00:26:29.486 [2024-05-15 11:18:26.577819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.486 [2024-05-15 11:18:26.577872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.486 [2024-05-15 11:18:26.577887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.486 [2024-05-15 11:18:26.577893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.486 [2024-05-15 11:18:26.577899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.486 [2024-05-15 11:18:26.577913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.486 qpair failed and we were unable to recover it.
00:26:29.486 [2024-05-15 11:18:26.587949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.486 [2024-05-15 11:18:26.588002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.486 [2024-05-15 11:18:26.588017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.486 [2024-05-15 11:18:26.588024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.486 [2024-05-15 11:18:26.588030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.486 [2024-05-15 11:18:26.588044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.486 qpair failed and we were unable to recover it.
00:26:29.486 [2024-05-15 11:18:26.597922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.486 [2024-05-15 11:18:26.597985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.486 [2024-05-15 11:18:26.597999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.486 [2024-05-15 11:18:26.598006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.486 [2024-05-15 11:18:26.598011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.486 [2024-05-15 11:18:26.598026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.486 qpair failed and we were unable to recover it. 
00:26:29.486 [2024-05-15 11:18:26.607907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.486 [2024-05-15 11:18:26.607993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.486 [2024-05-15 11:18:26.608007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.486 [2024-05-15 11:18:26.608017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.486 [2024-05-15 11:18:26.608023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.486 [2024-05-15 11:18:26.608037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.486 qpair failed and we were unable to recover it. 
00:26:29.486 [2024-05-15 11:18:26.617980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.486 [2024-05-15 11:18:26.618034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.486 [2024-05-15 11:18:26.618050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.486 [2024-05-15 11:18:26.618057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.486 [2024-05-15 11:18:26.618063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.486 [2024-05-15 11:18:26.618078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.486 qpair failed and we were unable to recover it. 
00:26:29.486 [2024-05-15 11:18:26.628072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.486 [2024-05-15 11:18:26.628177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.486 [2024-05-15 11:18:26.628192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.486 [2024-05-15 11:18:26.628198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.486 [2024-05-15 11:18:26.628204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.486 [2024-05-15 11:18:26.628219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.486 qpair failed and we were unable to recover it. 
00:26:29.486 [2024-05-15 11:18:26.638093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.486 [2024-05-15 11:18:26.638158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.486 [2024-05-15 11:18:26.638178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.486 [2024-05-15 11:18:26.638185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.486 [2024-05-15 11:18:26.638190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.486 [2024-05-15 11:18:26.638205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.486 qpair failed and we were unable to recover it. 
00:26:29.486 [2024-05-15 11:18:26.648100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.486 [2024-05-15 11:18:26.648154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.486 [2024-05-15 11:18:26.648173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.486 [2024-05-15 11:18:26.648179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.486 [2024-05-15 11:18:26.648185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.486 [2024-05-15 11:18:26.648200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.658115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.658179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.658193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.658200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.658206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.658220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.668150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.668213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.668228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.668236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.668242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.668257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.678214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.678296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.678310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.678317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.678322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.678336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.688210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.688263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.688277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.688284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.688290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.688304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.698388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.698450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.698467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.698474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.698480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.698494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.708311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.708368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.708382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.708389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.708395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.708409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.718341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.718397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.718411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.718419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.718424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.718439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.728388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.728445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.728459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.728466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.728473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.728487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.487 [2024-05-15 11:18:26.738368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.487 [2024-05-15 11:18:26.738422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.487 [2024-05-15 11:18:26.738436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.487 [2024-05-15 11:18:26.738444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.487 [2024-05-15 11:18:26.738449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.487 [2024-05-15 11:18:26.738466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.487 qpair failed and we were unable to recover it. 
00:26:29.746 [2024-05-15 11:18:26.748379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.746 [2024-05-15 11:18:26.748435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.746 [2024-05-15 11:18:26.748449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.746 [2024-05-15 11:18:26.748456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.746 [2024-05-15 11:18:26.748462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.746 [2024-05-15 11:18:26.748476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.746 qpair failed and we were unable to recover it. 
00:26:29.746 [2024-05-15 11:18:26.758418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.746 [2024-05-15 11:18:26.758476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.758490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.758497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.758503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.758517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.768459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.768518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.768533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.768539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.768546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.768560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.778471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.778523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.778537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.778544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.778551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.778565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.788473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.788548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.788567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.788573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.788579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.788593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.798473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.798527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.798541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.798547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.798554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.798568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.808507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.808562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.808576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.808583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.808589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.808603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.818510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.818577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.818592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.818599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.818605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.818619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.828600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.828660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.828674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.828681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.828690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.828705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.838662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.838719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.838735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.838741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.838747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.838762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.848689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.848779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.848793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.848799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.848805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.848819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.858735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.858791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.858805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.858812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.858817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.858832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.868726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.747 [2024-05-15 11:18:26.868800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.747 [2024-05-15 11:18:26.868815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.747 [2024-05-15 11:18:26.868822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.747 [2024-05-15 11:18:26.868827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:29.747 [2024-05-15 11:18:26.868841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:29.747 qpair failed and we were unable to recover it. 
00:26:29.747 [2024-05-15 11:18:26.878755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.747 [2024-05-15 11:18:26.878819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.747 [2024-05-15 11:18:26.878833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.747 [2024-05-15 11:18:26.878840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.747 [2024-05-15 11:18:26.878846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.747 [2024-05-15 11:18:26.878860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.747 qpair failed and we were unable to recover it.
00:26:29.747 [2024-05-15 11:18:26.888707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.747 [2024-05-15 11:18:26.888764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.747 [2024-05-15 11:18:26.888778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.747 [2024-05-15 11:18:26.888785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.888791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.888805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.898730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.898792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.898806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.898812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.898818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.898833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.908821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.908882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.908897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.908904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.908910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.908924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.918855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.918912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.918927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.918934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.918943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.918957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.928848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.928931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.928946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.928952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.928958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.928972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.938960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.939016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.939031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.939038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.939044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.939058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.948948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.949008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.949023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.949030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.949036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.949050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.959004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.959061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.959076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.959083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.959089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.959103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.968973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.969068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.969083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.969090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.969096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.969111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.979010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.979072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.979088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.979094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.979100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.979115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.989126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.989186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.989201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.989208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.989214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.989228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:26.999130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:26.999190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:26.999206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:26.999215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:26.999223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:26.999238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:29.748 [2024-05-15 11:18:27.009069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:29.748 [2024-05-15 11:18:27.009126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:29.748 [2024-05-15 11:18:27.009141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:29.748 [2024-05-15 11:18:27.009151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:29.748 [2024-05-15 11:18:27.009157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:29.748 [2024-05-15 11:18:27.009176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.748 qpair failed and we were unable to recover it.
00:26:30.007 [2024-05-15 11:18:27.019139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.007 [2024-05-15 11:18:27.019199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.007 [2024-05-15 11:18:27.019215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.007 [2024-05-15 11:18:27.019222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.007 [2024-05-15 11:18:27.019228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.007 [2024-05-15 11:18:27.019242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.007 qpair failed and we were unable to recover it.
00:26:30.007 [2024-05-15 11:18:27.029236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.007 [2024-05-15 11:18:27.029302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.007 [2024-05-15 11:18:27.029320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.007 [2024-05-15 11:18:27.029330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.007 [2024-05-15 11:18:27.029340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.007 [2024-05-15 11:18:27.029359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.007 qpair failed and we were unable to recover it.
00:26:30.007 [2024-05-15 11:18:27.039247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.007 [2024-05-15 11:18:27.039311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.007 [2024-05-15 11:18:27.039326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.007 [2024-05-15 11:18:27.039333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.007 [2024-05-15 11:18:27.039339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.007 [2024-05-15 11:18:27.039354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.007 qpair failed and we were unable to recover it.
00:26:30.007 [2024-05-15 11:18:27.049232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.007 [2024-05-15 11:18:27.049284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.007 [2024-05-15 11:18:27.049298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.007 [2024-05-15 11:18:27.049305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.007 [2024-05-15 11:18:27.049311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.007 [2024-05-15 11:18:27.049325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.007 qpair failed and we were unable to recover it.
00:26:30.007 [2024-05-15 11:18:27.059329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.007 [2024-05-15 11:18:27.059384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.007 [2024-05-15 11:18:27.059399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.007 [2024-05-15 11:18:27.059405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.007 [2024-05-15 11:18:27.059411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.007 [2024-05-15 11:18:27.059425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.007 qpair failed and we were unable to recover it.
00:26:30.007 [2024-05-15 11:18:27.069321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.069383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.069398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.069405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.069411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.069425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.079365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.079427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.079442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.079449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.079455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.079469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.089326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.089395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.089411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.089418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.089424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.089438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.099373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.099423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.099441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.099448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.099454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.099468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.109360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.109416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.109431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.109438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.109444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.109459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.119368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.119432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.119446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.119453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.119459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.119473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.129483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.129544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.129559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.129566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.129572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.129587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.139540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.139628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.139642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.139649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.139655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.139673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.149530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.149588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.149602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.149609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.149615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.149629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.159491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.159542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.159557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.159564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.159570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.159584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.169557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.169654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.169668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.169675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.169681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.169695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.179533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.179588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.179602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.179609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.179615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.179629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.189702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.189762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.189780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.189787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.189793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.189806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.008 qpair failed and we were unable to recover it.
00:26:30.008 [2024-05-15 11:18:27.199662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.008 [2024-05-15 11:18:27.199722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.008 [2024-05-15 11:18:27.199737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.008 [2024-05-15 11:18:27.199744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.008 [2024-05-15 11:18:27.199749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.008 [2024-05-15 11:18:27.199764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.009 qpair failed and we were unable to recover it.
00:26:30.009 [2024-05-15 11:18:27.209687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.009 [2024-05-15 11:18:27.209740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.009 [2024-05-15 11:18:27.209755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.009 [2024-05-15 11:18:27.209762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.009 [2024-05-15 11:18:27.209768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.009 [2024-05-15 11:18:27.209782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.009 qpair failed and we were unable to recover it.
00:26:30.009 [2024-05-15 11:18:27.219753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.009 [2024-05-15 11:18:27.219809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.009 [2024-05-15 11:18:27.219824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.009 [2024-05-15 11:18:27.219831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.009 [2024-05-15 11:18:27.219837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.009 [2024-05-15 11:18:27.219851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.009 qpair failed and we were unable to recover it.
00:26:30.009 [2024-05-15 11:18:27.229747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.009 [2024-05-15 11:18:27.229830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.009 [2024-05-15 11:18:27.229847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.009 [2024-05-15 11:18:27.229857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.009 [2024-05-15 11:18:27.229866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.009 [2024-05-15 11:18:27.229890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.009 qpair failed and we were unable to recover it.
00:26:30.009 [2024-05-15 11:18:27.239778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.009 [2024-05-15 11:18:27.239833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.009 [2024-05-15 11:18:27.239849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.009 [2024-05-15 11:18:27.239856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.009 [2024-05-15 11:18:27.239862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.009 [2024-05-15 11:18:27.239877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.009 qpair failed and we were unable to recover it. 
00:26:30.009 [2024-05-15 11:18:27.249782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.009 [2024-05-15 11:18:27.249840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.009 [2024-05-15 11:18:27.249855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.009 [2024-05-15 11:18:27.249862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.009 [2024-05-15 11:18:27.249868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.009 [2024-05-15 11:18:27.249882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.009 qpair failed and we were unable to recover it. 
00:26:30.009 [2024-05-15 11:18:27.259861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.009 [2024-05-15 11:18:27.259916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.009 [2024-05-15 11:18:27.259931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.009 [2024-05-15 11:18:27.259937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.009 [2024-05-15 11:18:27.259943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.009 [2024-05-15 11:18:27.259958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.009 qpair failed and we were unable to recover it. 
00:26:30.009 [2024-05-15 11:18:27.269913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.009 [2024-05-15 11:18:27.269978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.009 [2024-05-15 11:18:27.269993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.009 [2024-05-15 11:18:27.269999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.009 [2024-05-15 11:18:27.270005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.009 [2024-05-15 11:18:27.270019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.009 qpair failed and we were unable to recover it. 
00:26:30.268 [2024-05-15 11:18:27.279912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.268 [2024-05-15 11:18:27.279974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.268 [2024-05-15 11:18:27.279989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.268 [2024-05-15 11:18:27.279996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.268 [2024-05-15 11:18:27.280002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.268 [2024-05-15 11:18:27.280016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.268 qpair failed and we were unable to recover it. 
00:26:30.268 [2024-05-15 11:18:27.289963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.268 [2024-05-15 11:18:27.290031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.290045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.290052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.290058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.290072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.299970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.300026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.300041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.300047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.300053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.300067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.310004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.310064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.310079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.310085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.310091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.310105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.320024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.320076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.320091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.320097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.320109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.320124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.330040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.330091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.330105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.330112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.330118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.330132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.340077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.340131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.340146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.340153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.340159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.340178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.350114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.350175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.350189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.350196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.350202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.350217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.360138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.360200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.360214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.360221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.360227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.360242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.370170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.370225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.370240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.370247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.370253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.370267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.380201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.380257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.380272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.380279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.380286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.269 [2024-05-15 11:18:27.380300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.269 qpair failed and we were unable to recover it. 
00:26:30.269 [2024-05-15 11:18:27.390268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.269 [2024-05-15 11:18:27.390328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.269 [2024-05-15 11:18:27.390342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.269 [2024-05-15 11:18:27.390349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.269 [2024-05-15 11:18:27.390355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.390369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.400263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.400318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.400332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.400339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.400345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.400359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.410267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.410318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.410332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.410342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.410348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.410362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.420323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.420376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.420391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.420398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.420404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.420418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.430408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.430466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.430481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.430488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.430494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.430508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.440372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.440443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.440458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.440465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.440471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.440485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.450444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.450505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.450518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.450525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.450531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.450546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.460465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.460524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.460538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.270 [2024-05-15 11:18:27.460545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.270 [2024-05-15 11:18:27.460551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.270 [2024-05-15 11:18:27.460565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.270 qpair failed and we were unable to recover it. 
00:26:30.270 [2024-05-15 11:18:27.470470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.270 [2024-05-15 11:18:27.470525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.270 [2024-05-15 11:18:27.470540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.470546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.470552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.470567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.271 [2024-05-15 11:18:27.480495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.271 [2024-05-15 11:18:27.480553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.271 [2024-05-15 11:18:27.480568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.480575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.480581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.480595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.271 [2024-05-15 11:18:27.490525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.271 [2024-05-15 11:18:27.490582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.271 [2024-05-15 11:18:27.490596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.490603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.490609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.490623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.271 [2024-05-15 11:18:27.500550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.271 [2024-05-15 11:18:27.500608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.271 [2024-05-15 11:18:27.500627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.500634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.500640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.500654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.271 [2024-05-15 11:18:27.510595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.271 [2024-05-15 11:18:27.510649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.271 [2024-05-15 11:18:27.510663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.510670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.510676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.510690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.271 [2024-05-15 11:18:27.520614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.271 [2024-05-15 11:18:27.520670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.271 [2024-05-15 11:18:27.520685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.520692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.520698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.520712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.271 [2024-05-15 11:18:27.530641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.271 [2024-05-15 11:18:27.530695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.271 [2024-05-15 11:18:27.530709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.271 [2024-05-15 11:18:27.530716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.271 [2024-05-15 11:18:27.530721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.271 [2024-05-15 11:18:27.530736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.271 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.540664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.540718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.540732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.540739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.540745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.540763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.550697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.550759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.550773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.550780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.550786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.550800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.560709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.560764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.560778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.560785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.560791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.560806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.570786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.570838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.570853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.570861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.570867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.570881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.580762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.580818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.580833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.580840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.580846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.580861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.590737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.590795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.590813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.590820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.590826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.590840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.600821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.600872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.600886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.600893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.600900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.600914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.610839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.610894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.610908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.610915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.610921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.610935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.531 qpair failed and we were unable to recover it. 
00:26:30.531 [2024-05-15 11:18:27.620889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.531 [2024-05-15 11:18:27.620946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.531 [2024-05-15 11:18:27.620961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.531 [2024-05-15 11:18:27.620967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.531 [2024-05-15 11:18:27.620973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.531 [2024-05-15 11:18:27.620988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.630979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.631058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.631073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.631079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.631085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.631104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.640935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.640991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.641006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.641013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.641019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.641033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.650965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.651019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.651033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.651040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.651046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.651061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.660993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.661050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.661064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.661071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.661077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.661091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.671039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.671096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.671111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.671118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.671124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.671138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.681052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.681106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.681124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.681131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.681137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.681151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.691087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.691145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.691159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.691170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.691176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.691190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.701158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.701217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.701232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.701239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.701245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.701259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.711218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.711307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.711322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.711328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.711334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.711348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.721180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.721237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.721252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.721259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.721268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.721282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.731210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.731267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.731281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.731288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.731294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.731308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.741264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.741319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.741333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.741340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.741346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.741360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.751280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.751335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.751350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.751356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.751362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.532 [2024-05-15 11:18:27.751376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.532 qpair failed and we were unable to recover it. 
00:26:30.532 [2024-05-15 11:18:27.761323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.532 [2024-05-15 11:18:27.761382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.532 [2024-05-15 11:18:27.761396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.532 [2024-05-15 11:18:27.761403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.532 [2024-05-15 11:18:27.761409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.533 [2024-05-15 11:18:27.761423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.533 qpair failed and we were unable to recover it. 
00:26:30.533 [2024-05-15 11:18:27.771261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.533 [2024-05-15 11:18:27.771324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.533 [2024-05-15 11:18:27.771339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.533 [2024-05-15 11:18:27.771345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.533 [2024-05-15 11:18:27.771351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.533 [2024-05-15 11:18:27.771366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.533 qpair failed and we were unable to recover it. 
00:26:30.533 [2024-05-15 11:18:27.781374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.533 [2024-05-15 11:18:27.781429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.533 [2024-05-15 11:18:27.781444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.533 [2024-05-15 11:18:27.781451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.533 [2024-05-15 11:18:27.781456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.533 [2024-05-15 11:18:27.781470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.533 qpair failed and we were unable to recover it. 
00:26:30.533 [2024-05-15 11:18:27.791395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.533 [2024-05-15 11:18:27.791455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.533 [2024-05-15 11:18:27.791469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.533 [2024-05-15 11:18:27.791476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.533 [2024-05-15 11:18:27.791482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.533 [2024-05-15 11:18:27.791496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.533 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-05-15 11:18:27.801394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.792 [2024-05-15 11:18:27.801450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.792 [2024-05-15 11:18:27.801464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.792 [2024-05-15 11:18:27.801471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.792 [2024-05-15 11:18:27.801477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.792 [2024-05-15 11:18:27.801491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-05-15 11:18:27.811438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.792 [2024-05-15 11:18:27.811496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.792 [2024-05-15 11:18:27.811510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.792 [2024-05-15 11:18:27.811520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.792 [2024-05-15 11:18:27.811526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.792 [2024-05-15 11:18:27.811540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-05-15 11:18:27.821463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.792 [2024-05-15 11:18:27.821518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.792 [2024-05-15 11:18:27.821532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.792 [2024-05-15 11:18:27.821539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.792 [2024-05-15 11:18:27.821545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.792 [2024-05-15 11:18:27.821559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-05-15 11:18:27.831508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.792 [2024-05-15 11:18:27.831594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.792 [2024-05-15 11:18:27.831610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.792 [2024-05-15 11:18:27.831616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.792 [2024-05-15 11:18:27.831622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:30.792 [2024-05-15 11:18:27.831637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-05-15 11:18:27.841533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.792 [2024-05-15 11:18:27.841588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.792 [2024-05-15 11:18:27.841603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.792 [2024-05-15 11:18:27.841610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.792 [2024-05-15 11:18:27.841615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.841629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.851562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.851613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.851627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.851634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.851640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.851655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.861602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.861654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.861669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.861676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.861682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.861696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.871623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.871677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.871692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.871699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.871705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.871719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.881642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.881699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.881714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.881721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.881727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.881741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.891669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.891725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.891739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.891746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.891752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.891766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.901705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.901761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.901776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.901786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.901792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.901805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.911736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.911794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.911808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.911815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.911821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.911835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.921764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.921819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.921834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.921841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.921847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.921861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.931787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.793 [2024-05-15 11:18:27.931841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.793 [2024-05-15 11:18:27.931856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.793 [2024-05-15 11:18:27.931863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.793 [2024-05-15 11:18:27.931869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.793 [2024-05-15 11:18:27.931884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.793 qpair failed and we were unable to recover it.
00:26:30.793 [2024-05-15 11:18:27.941837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:27.941889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:27.941904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:27.941910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:27.941916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:27.941931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:27.951898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:27.951956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:27.951970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:27.951977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:27.951983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:27.951997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:27.961892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:27.961978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:27.961992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:27.961998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:27.962004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:27.962018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:27.971927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:27.971982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:27.971997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:27.972004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:27.972010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:27.972024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:27.981945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:27.982009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:27.982025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:27.982031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:27.982037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:27.982052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:27.991972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:27.992027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:27.992045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:27.992052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:27.992058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:27.992072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:28.001996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:28.002048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:28.002063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:28.002069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:28.002075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:28.002090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:28.012021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:28.012076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:28.012090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:28.012097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:28.012103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:28.012117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:28.022024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.794 [2024-05-15 11:18:28.022078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.794 [2024-05-15 11:18:28.022093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.794 [2024-05-15 11:18:28.022100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.794 [2024-05-15 11:18:28.022106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.794 [2024-05-15 11:18:28.022120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.794 qpair failed and we were unable to recover it.
00:26:30.794 [2024-05-15 11:18:28.032082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.795 [2024-05-15 11:18:28.032137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.795 [2024-05-15 11:18:28.032152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.795 [2024-05-15 11:18:28.032159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.795 [2024-05-15 11:18:28.032169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.795 [2024-05-15 11:18:28.032191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.795 qpair failed and we were unable to recover it.
00:26:30.795 [2024-05-15 11:18:28.042122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.795 [2024-05-15 11:18:28.042204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.795 [2024-05-15 11:18:28.042219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.795 [2024-05-15 11:18:28.042225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.795 [2024-05-15 11:18:28.042231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.795 [2024-05-15 11:18:28.042245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.795 qpair failed and we were unable to recover it.
00:26:30.795 [2024-05-15 11:18:28.052132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:30.795 [2024-05-15 11:18:28.052185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:30.795 [2024-05-15 11:18:28.052200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:30.795 [2024-05-15 11:18:28.052207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:30.795 [2024-05-15 11:18:28.052214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:30.795 [2024-05-15 11:18:28.052228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:30.795 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.062173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.062232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.062246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.062253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.062259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.062273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.072244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.072297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.072311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.072318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.072324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.072338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.082261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.082320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.082338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.082344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.082350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.082364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.092255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.092313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.092327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.092334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.092340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.092354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.102224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.102279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.102294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.102300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.102307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.102320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.112354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.112408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.112422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.112429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.112435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.112449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.122343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.122400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.122415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.122422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.122430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.122444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.132363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.054 [2024-05-15 11:18:28.132443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.054 [2024-05-15 11:18:28.132458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.054 [2024-05-15 11:18:28.132465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.054 [2024-05-15 11:18:28.132471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.054 [2024-05-15 11:18:28.132485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.054 qpair failed and we were unable to recover it.
00:26:31.054 [2024-05-15 11:18:28.142449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.055 [2024-05-15 11:18:28.142503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.055 [2024-05-15 11:18:28.142517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.055 [2024-05-15 11:18:28.142524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.055 [2024-05-15 11:18:28.142530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.055 [2024-05-15 11:18:28.142544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.055 qpair failed and we were unable to recover it.
00:26:31.055 [2024-05-15 11:18:28.152410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.055 [2024-05-15 11:18:28.152466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.055 [2024-05-15 11:18:28.152481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.055 [2024-05-15 11:18:28.152490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.055 [2024-05-15 11:18:28.152496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.055 [2024-05-15 11:18:28.152510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.055 qpair failed and we were unable to recover it.
00:26:31.055 [2024-05-15 11:18:28.162451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.055 [2024-05-15 11:18:28.162508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.055 [2024-05-15 11:18:28.162523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.055 [2024-05-15 11:18:28.162530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.055 [2024-05-15 11:18:28.162536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.055 [2024-05-15 11:18:28.162550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.055 qpair failed and we were unable to recover it.
00:26:31.055 [2024-05-15 11:18:28.172399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.055 [2024-05-15 11:18:28.172459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.055 [2024-05-15 11:18:28.172474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.055 [2024-05-15 11:18:28.172481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.055 [2024-05-15 11:18:28.172488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.055 [2024-05-15 11:18:28.172502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.055 qpair failed and we were unable to recover it.
00:26:31.055 [2024-05-15 11:18:28.182508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.055 [2024-05-15 11:18:28.182558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.055 [2024-05-15 11:18:28.182573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.055 [2024-05-15 11:18:28.182581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.055 [2024-05-15 11:18:28.182588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.055 [2024-05-15 11:18:28.182602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.055 qpair failed and we were unable to recover it.
00:26:31.055 [2024-05-15 11:18:28.192542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.055 [2024-05-15 11:18:28.192603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.055 [2024-05-15 11:18:28.192617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.055 [2024-05-15 11:18:28.192624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.055 [2024-05-15 11:18:28.192630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90
00:26:31.055 [2024-05-15 11:18:28.192644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.055 qpair failed and we were unable to recover it.
00:26:31.055 [2024-05-15 11:18:28.202557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.202617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.202631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.202639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.202645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.202658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.212592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.212655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.212669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.212680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.212686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.212700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.222620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.222670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.222685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.222692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.222697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.222711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.232587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.232645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.232660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.232667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.232673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.232687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.242670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.242731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.242749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.242759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.242767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.242781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.252736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.252818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.252833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.252839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.252845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.252859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.262710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.262764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.262778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.262785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.262791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.262805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.272700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.272765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.272779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.272785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.055 [2024-05-15 11:18:28.272791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.055 [2024-05-15 11:18:28.272805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.055 qpair failed and we were unable to recover it. 
00:26:31.055 [2024-05-15 11:18:28.282829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.055 [2024-05-15 11:18:28.282884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.055 [2024-05-15 11:18:28.282899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.055 [2024-05-15 11:18:28.282906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.056 [2024-05-15 11:18:28.282911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.056 [2024-05-15 11:18:28.282925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.056 qpair failed and we were unable to recover it. 
00:26:31.056 [2024-05-15 11:18:28.292877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.056 [2024-05-15 11:18:28.292942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.056 [2024-05-15 11:18:28.292957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.056 [2024-05-15 11:18:28.292963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.056 [2024-05-15 11:18:28.292969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.056 [2024-05-15 11:18:28.292984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.056 qpair failed and we were unable to recover it. 
00:26:31.056 [2024-05-15 11:18:28.302891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.056 [2024-05-15 11:18:28.302943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.056 [2024-05-15 11:18:28.302957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.056 [2024-05-15 11:18:28.302966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.056 [2024-05-15 11:18:28.302972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.056 [2024-05-15 11:18:28.302987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.056 qpair failed and we were unable to recover it. 
00:26:31.056 [2024-05-15 11:18:28.312889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.056 [2024-05-15 11:18:28.312950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.056 [2024-05-15 11:18:28.312964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.056 [2024-05-15 11:18:28.312971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.056 [2024-05-15 11:18:28.312977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.056 [2024-05-15 11:18:28.312992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.056 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.322904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.322960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.322975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.322981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.322987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.323001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.332934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.333028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.333043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.333049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.333055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.333069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.342944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.343031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.343045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.343052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.343057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.343071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.352998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.353056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.353071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.353078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.353084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.353099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.362950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.363002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.363016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.363022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.363029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.363043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.373072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.373123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.373137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.373144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.373150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.373170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.383008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.383064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.383080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.383086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.383092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.383107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.393092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.393152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.393175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.393182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.393188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.393202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.403173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.403227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.315 [2024-05-15 11:18:28.403242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.315 [2024-05-15 11:18:28.403249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.315 [2024-05-15 11:18:28.403255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.315 [2024-05-15 11:18:28.403269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.315 qpair failed and we were unable to recover it. 
00:26:31.315 [2024-05-15 11:18:28.413214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.315 [2024-05-15 11:18:28.413274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.413288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.413296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.413302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.413316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.423126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.423190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.423206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.423213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.423219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.423235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.433211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.433269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.433285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.433292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.433298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.433316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.443267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.443322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.443337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.443344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.443350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.443364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.453289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.453340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.453355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.453362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.453368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.453383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.463317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.463375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.463390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.463397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.463402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.463416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.473302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.473361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.473376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.473383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.473389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.473403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.483405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.483475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.483493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.483500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.483506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.483520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.493337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.493399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.493414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.493420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.493426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.493440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.503404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.503459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.503473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.503479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.503485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.503499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.513497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.513553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.513568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.513574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.513580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.513594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.523451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.523503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.523517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.523523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.523532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.316 [2024-05-15 11:18:28.523546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.316 qpair failed and we were unable to recover it. 
00:26:31.316 [2024-05-15 11:18:28.533433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.316 [2024-05-15 11:18:28.533493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.316 [2024-05-15 11:18:28.533507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.316 [2024-05-15 11:18:28.533513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.316 [2024-05-15 11:18:28.533519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.317 [2024-05-15 11:18:28.533534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.317 qpair failed and we were unable to recover it. 
00:26:31.317 [2024-05-15 11:18:28.543537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.317 [2024-05-15 11:18:28.543591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.317 [2024-05-15 11:18:28.543605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.317 [2024-05-15 11:18:28.543612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.317 [2024-05-15 11:18:28.543618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.317 [2024-05-15 11:18:28.543632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.317 qpair failed and we were unable to recover it. 
00:26:31.317 [2024-05-15 11:18:28.553567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.317 [2024-05-15 11:18:28.553623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.317 [2024-05-15 11:18:28.553637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.317 [2024-05-15 11:18:28.553644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.317 [2024-05-15 11:18:28.553650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.317 [2024-05-15 11:18:28.553664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.317 qpair failed and we were unable to recover it. 
00:26:31.317 [2024-05-15 11:18:28.563586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.317 [2024-05-15 11:18:28.563643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.317 [2024-05-15 11:18:28.563657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.317 [2024-05-15 11:18:28.563664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.317 [2024-05-15 11:18:28.563670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.317 [2024-05-15 11:18:28.563684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.317 qpair failed and we were unable to recover it. 
00:26:31.317 [2024-05-15 11:18:28.573612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.317 [2024-05-15 11:18:28.573673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.317 [2024-05-15 11:18:28.573687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.317 [2024-05-15 11:18:28.573694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.317 [2024-05-15 11:18:28.573700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.317 [2024-05-15 11:18:28.573714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.317 qpair failed and we were unable to recover it. 
00:26:31.575 [2024-05-15 11:18:28.583679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.575 [2024-05-15 11:18:28.583744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.583759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.583766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.583772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.583786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.593761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.593852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.593866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.593873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.593879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.593893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.603636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.603694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.603708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.603715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.603720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.603734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.613728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.613782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.613796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.613803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.613812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.613826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.623762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.623820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.623834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.623841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.623847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.623861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.633811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.633868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.633883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.633890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.633896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.633910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.643816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.643877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.643892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.643898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.643904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.643918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.653806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.653855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.653869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.653875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.653881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:31.576 [2024-05-15 11:18:28.653895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.663849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.663916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.663943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.663954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.663963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.576 [2024-05-15 11:18:28.663986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.673910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.673967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.673983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.673990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.673996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.576 [2024-05-15 11:18:28.674011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.683938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.683996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.684012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.684019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.684025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.576 [2024-05-15 11:18:28.684040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.693967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.694025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.576 [2024-05-15 11:18:28.694041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.576 [2024-05-15 11:18:28.694048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.576 [2024-05-15 11:18:28.694054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.576 [2024-05-15 11:18:28.694069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.576 qpair failed and we were unable to recover it. 
00:26:31.576 [2024-05-15 11:18:28.704052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.576 [2024-05-15 11:18:28.704113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.704129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.704138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.704144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.704159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.714031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.714111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.714126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.714133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.714139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.714153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.724054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.724111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.724126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.724133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.724139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.724153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.734091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.734148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.734163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.734174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.734180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.734194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.744157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.744218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.744233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.744240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.744246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.744260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.754191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.754268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.754284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.754291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.754297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.754311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.764181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.764235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.764250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.764257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.764263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.764277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.774208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.774275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.774290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.774297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.774303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.774316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.784230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.784283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.784298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.784305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.784311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.784325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.794268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.794326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.794341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.794352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.794358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.794372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.804297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.804354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.804369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.804376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.804382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.804395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.814334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.814390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.814405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.814412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.814418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.814432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.824350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.577 [2024-05-15 11:18:28.824406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.577 [2024-05-15 11:18:28.824420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.577 [2024-05-15 11:18:28.824427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.577 [2024-05-15 11:18:28.824433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.577 [2024-05-15 11:18:28.824446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.577 qpair failed and we were unable to recover it. 
00:26:31.577 [2024-05-15 11:18:28.834398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.578 [2024-05-15 11:18:28.834461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.578 [2024-05-15 11:18:28.834476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.578 [2024-05-15 11:18:28.834483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.578 [2024-05-15 11:18:28.834489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.578 [2024-05-15 11:18:28.834502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.578 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.844443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.844509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.844530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.844538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.844545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.844561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.854435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.854489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.854504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.854511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.854517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.854531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.864516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.864570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.864585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.864592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.864597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.864611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.874448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.874503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.874518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.874525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.874530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.874544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.884518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.884589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.884604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.884614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.884620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.884634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.894553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.894626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.894641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.894648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.894653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.894667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.904580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.904680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.904695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.904701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.904707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.904721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.914659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.914714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.914728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.914735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.914741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.914754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.924642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.924698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.924712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.924719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.924725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.924738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.934650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.837 [2024-05-15 11:18:28.934699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.837 [2024-05-15 11:18:28.934713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.837 [2024-05-15 11:18:28.934720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.837 [2024-05-15 11:18:28.934726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.837 [2024-05-15 11:18:28.934740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.837 qpair failed and we were unable to recover it. 
00:26:31.837 [2024-05-15 11:18:28.944631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:28.944704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:28.944718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:28.944725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:28.944731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:28.944745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:28.954731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:28.954785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:28.954800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:28.954807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:28.954812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:28.954826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:28.964750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:28.964803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:28.964818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:28.964824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:28.964831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:28.964844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:28.974799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:28.974858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:28.974875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:28.974882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:28.974888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:28.974901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:28.984819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:28.984869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:28.984884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:28.984890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:28.984897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:28.984910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:28.994858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:28.994912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:28.994927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:28.994934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:28.994941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:28.994954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.004895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.004966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.004981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.004988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.004994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.005008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.014904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.014957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.014972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.014979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.014985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.015002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.024921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.024990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.025004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.025011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.025017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.025031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.034980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.035033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.035047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.035054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.035060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.035074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.045044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.045109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.045123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.045130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.045136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.045150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.055018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.055072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.055087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.055093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.055099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.055113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.065072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.065127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.065145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.065152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.065158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.065174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.075080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.838 [2024-05-15 11:18:29.075134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.838 [2024-05-15 11:18:29.075148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.838 [2024-05-15 11:18:29.075155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.838 [2024-05-15 11:18:29.075162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.838 [2024-05-15 11:18:29.075178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.838 qpair failed and we were unable to recover it. 
00:26:31.838 [2024-05-15 11:18:29.085093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.839 [2024-05-15 11:18:29.085151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.839 [2024-05-15 11:18:29.085168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.839 [2024-05-15 11:18:29.085175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.839 [2024-05-15 11:18:29.085182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.839 [2024-05-15 11:18:29.085195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.839 qpair failed and we were unable to recover it. 
00:26:31.839 [2024-05-15 11:18:29.095127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.839 [2024-05-15 11:18:29.095188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.839 [2024-05-15 11:18:29.095203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.839 [2024-05-15 11:18:29.095209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.839 [2024-05-15 11:18:29.095215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:31.839 [2024-05-15 11:18:29.095229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.839 qpair failed and we were unable to recover it. 
00:26:32.098 [2024-05-15 11:18:29.105169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.098 [2024-05-15 11:18:29.105232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.098 [2024-05-15 11:18:29.105250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.098 [2024-05-15 11:18:29.105258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.098 [2024-05-15 11:18:29.105265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.098 [2024-05-15 11:18:29.105285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.098 qpair failed and we were unable to recover it. 
00:26:32.098 [2024-05-15 11:18:29.115204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.098 [2024-05-15 11:18:29.115264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.098 [2024-05-15 11:18:29.115282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.098 [2024-05-15 11:18:29.115289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.098 [2024-05-15 11:18:29.115295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.098 [2024-05-15 11:18:29.115310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.098 qpair failed and we were unable to recover it. 
00:26:32.098 [2024-05-15 11:18:29.125211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.098 [2024-05-15 11:18:29.125272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.125289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.125297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.125303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.125317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.135278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.135330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.135345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.135351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.135357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.135372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.145196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.145252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.145268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.145275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.145280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.145294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.155240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.155349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.155367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.155374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.155380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.155393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.165327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.165385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.165399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.165406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.165412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.165425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.175356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.175413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.175427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.175434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.175440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.175453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.185379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.185434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.185449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.185455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.185461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.185475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.195412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.195470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.195484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.195491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.195497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.195514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.205441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.205498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.205513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.205520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.205525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.205539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.215468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.215526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.215540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.215547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.215553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.215566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.225527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.225580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.225595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.225602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.225607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.225621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.235479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.235533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.235548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.235555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.235561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.235574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.099 qpair failed and we were unable to recover it.
00:26:32.099 [2024-05-15 11:18:29.245562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.099 [2024-05-15 11:18:29.245618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.099 [2024-05-15 11:18:29.245637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.099 [2024-05-15 11:18:29.245643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.099 [2024-05-15 11:18:29.245650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.099 [2024-05-15 11:18:29.245663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.255637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.255703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.255718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.255725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.255730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.255744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.265537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.265596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.265612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.265619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.265625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.265639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.275641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.275726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.275742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.275750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.275757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.275771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.285666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.285744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.285758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.285765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.285774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.285788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.295708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.295763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.295778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.295785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.295790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.295804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.305843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.305908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.305923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.305930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.305936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.305950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.315806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.315861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.315875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.315882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.315888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.315901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.325823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.325904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.325919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.325926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.325931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.325945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.335916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.335969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.335984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.335991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.335996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.336010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.345841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.345896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.345911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.345918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.345924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.345938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.100 qpair failed and we were unable to recover it.
00:26:32.100 [2024-05-15 11:18:29.355874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.100 [2024-05-15 11:18:29.355929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.100 [2024-05-15 11:18:29.355944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.100 [2024-05-15 11:18:29.355951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.100 [2024-05-15 11:18:29.355957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.100 [2024-05-15 11:18:29.355971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.101 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.365902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.365960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.365979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.365987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.365993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.366009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.375924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.375985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.376003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.376011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.376020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.376035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.385960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.386014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.386030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.386037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.386043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.386056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.395985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.396041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.396056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.396064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.396070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.396085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.405988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.406046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.406061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.406068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.406074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.406088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.416034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.416087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.416103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.416109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.416115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.416129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.426089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.426148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.426163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.426174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.426180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.426194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.436108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.436162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.436180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.436187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.359 [2024-05-15 11:18:29.436192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.359 [2024-05-15 11:18:29.436206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.359 qpair failed and we were unable to recover it.
00:26:32.359 [2024-05-15 11:18:29.446132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.359 [2024-05-15 11:18:29.446188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.359 [2024-05-15 11:18:29.446203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.359 [2024-05-15 11:18:29.446210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.360 [2024-05-15 11:18:29.446216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.360 [2024-05-15 11:18:29.446230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.360 qpair failed and we were unable to recover it.
00:26:32.360 [2024-05-15 11:18:29.456161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.360 [2024-05-15 11:18:29.456219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.360 [2024-05-15 11:18:29.456234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.360 [2024-05-15 11:18:29.456240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.360 [2024-05-15 11:18:29.456246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.360 [2024-05-15 11:18:29.456259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.360 qpair failed and we were unable to recover it.
00:26:32.360 [2024-05-15 11:18:29.466196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.360 [2024-05-15 11:18:29.466253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.360 [2024-05-15 11:18:29.466268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.360 [2024-05-15 11:18:29.466275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.360 [2024-05-15 11:18:29.466284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.360 [2024-05-15 11:18:29.466297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.360 qpair failed and we were unable to recover it.
00:26:32.360 [2024-05-15 11:18:29.476223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.360 [2024-05-15 11:18:29.476278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.360 [2024-05-15 11:18:29.476293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.360 [2024-05-15 11:18:29.476299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.360 [2024-05-15 11:18:29.476305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.360 [2024-05-15 11:18:29.476319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.360 qpair failed and we were unable to recover it.
00:26:32.360 [2024-05-15 11:18:29.486292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.486356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.486371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.486378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.486384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.486397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.496350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.496435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.496449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.496456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.496462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.496476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.506306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.506362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.506377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.506384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.506390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.506404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.516348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.516405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.516421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.516428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.516434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.516448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.526372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.526432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.526447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.526454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.526460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.526473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.536396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.536448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.536463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.536470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.536476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.536489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.546422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.546479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.546494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.546501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.546507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.546521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.556465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.556523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.556538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.556548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.556554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.556567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.566423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.566480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.566495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.566501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.566507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.566521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.576506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.576556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.576571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.360 [2024-05-15 11:18:29.576577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.360 [2024-05-15 11:18:29.576583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.360 [2024-05-15 11:18:29.576597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-05-15 11:18:29.586539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.360 [2024-05-15 11:18:29.586591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.360 [2024-05-15 11:18:29.586606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.361 [2024-05-15 11:18:29.586613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.361 [2024-05-15 11:18:29.586619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.361 [2024-05-15 11:18:29.586632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.361 qpair failed and we were unable to recover it. 
00:26:32.361 [2024-05-15 11:18:29.596577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.361 [2024-05-15 11:18:29.596632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.361 [2024-05-15 11:18:29.596646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.361 [2024-05-15 11:18:29.596653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.361 [2024-05-15 11:18:29.596659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.361 [2024-05-15 11:18:29.596672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.361 qpair failed and we were unable to recover it. 
00:26:32.361 [2024-05-15 11:18:29.606607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.361 [2024-05-15 11:18:29.606664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.361 [2024-05-15 11:18:29.606678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.361 [2024-05-15 11:18:29.606685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.361 [2024-05-15 11:18:29.606691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.361 [2024-05-15 11:18:29.606704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.361 qpair failed and we were unable to recover it. 
00:26:32.361 [2024-05-15 11:18:29.616635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.361 [2024-05-15 11:18:29.616688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.361 [2024-05-15 11:18:29.616703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.361 [2024-05-15 11:18:29.616709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.361 [2024-05-15 11:18:29.616716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.361 [2024-05-15 11:18:29.616732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.361 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.626663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.626726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.626745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.626752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.626758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.626774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.636705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.636765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.636783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.636791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.636797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.636812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.646754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.646815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.646830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.646842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.646849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.646864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.656707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.656793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.656808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.656815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.656821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.656835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.666791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.666850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.666865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.666872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.666878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.666892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.676812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.676874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.676889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.676896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.676903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.676916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.686831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.686893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.686908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.686915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.686922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.686936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.696885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.696942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.696957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.696964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.621 [2024-05-15 11:18:29.696970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.621 [2024-05-15 11:18:29.696984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-05-15 11:18:29.706943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.621 [2024-05-15 11:18:29.707000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.621 [2024-05-15 11:18:29.707014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.621 [2024-05-15 11:18:29.707021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.707027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.707040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.716941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.716998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.717012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.717019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.717025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.717039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.726995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.727053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.727068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.727075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.727080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.727094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.736977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.737030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.737045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.737055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.737061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.737074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.747028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.747080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.747095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.747101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.747107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.747121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.757046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.757107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.757122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.757129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.757135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.757148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.767089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.767144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.767159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.767170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.767176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.767190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.777050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.777108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.777123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.777129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.777135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.777149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.787124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.787186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.787202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.787209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.787215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.787228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.797197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.797255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.797270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.797277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.797283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.797296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.807213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.807266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.807282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.807288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.807295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.807309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.817236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.817294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.817309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.817316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.817322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.817335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.827265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.827326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.827341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.827351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.827356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.827370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.837330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.837388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.837402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.622 [2024-05-15 11:18:29.837409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.622 [2024-05-15 11:18:29.837415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.622 [2024-05-15 11:18:29.837429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-05-15 11:18:29.847329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.622 [2024-05-15 11:18:29.847407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.622 [2024-05-15 11:18:29.847421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.623 [2024-05-15 11:18:29.847428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.623 [2024-05-15 11:18:29.847433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.623 [2024-05-15 11:18:29.847447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-05-15 11:18:29.857338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.623 [2024-05-15 11:18:29.857390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.623 [2024-05-15 11:18:29.857404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.623 [2024-05-15 11:18:29.857411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.623 [2024-05-15 11:18:29.857417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.623 [2024-05-15 11:18:29.857431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-05-15 11:18:29.867359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.623 [2024-05-15 11:18:29.867411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.623 [2024-05-15 11:18:29.867426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.623 [2024-05-15 11:18:29.867432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.623 [2024-05-15 11:18:29.867439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.623 [2024-05-15 11:18:29.867452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-05-15 11:18:29.877405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.623 [2024-05-15 11:18:29.877459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.623 [2024-05-15 11:18:29.877474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.623 [2024-05-15 11:18:29.877481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.623 [2024-05-15 11:18:29.877486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.623 [2024-05-15 11:18:29.877500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.887439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.887494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.887512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.887520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.887526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.887542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.897382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.897439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.897457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.897465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.897470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.897486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.907466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.907522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.907538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.907544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.907550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.907566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.917509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.917566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.917584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.917591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.917597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.917611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.927528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.927586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.927602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.927609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.927614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.927629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.937550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.937606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.937620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.937627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.937633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.937647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.947536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.947594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.947609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.947615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.947622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.947635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.957564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.957627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.957642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.957649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.957655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.957669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.967644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.967699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.967714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.967722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.967728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.967742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.977602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.977659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.977673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.977680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.977686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.977700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.987731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.987790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.987805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.987812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.987818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.987831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:29.997664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:29.997731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.882 [2024-05-15 11:18:29.997746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.882 [2024-05-15 11:18:29.997753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.882 [2024-05-15 11:18:29.997759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.882 [2024-05-15 11:18:29.997772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.882 qpair failed and we were unable to recover it. 
00:26:32.882 [2024-05-15 11:18:30.007695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.882 [2024-05-15 11:18:30.007750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.007769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.007776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.007782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.007796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.017768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.017837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.017854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.017861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.017867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.017882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.027799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.027857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.027872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.027879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.027885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.027898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.037930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.038016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.038035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.038042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.038049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.038066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.047838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.047898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.047914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.047920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.047926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.047945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.057919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.057974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.057989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.057996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.058003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.058017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.067920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.067989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.068004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.068011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.068017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.068031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.077979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.883 [2024-05-15 11:18:30.078037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.883 [2024-05-15 11:18:30.078052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.883 [2024-05-15 11:18:30.078059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.883 [2024-05-15 11:18:30.078065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:32.883 [2024-05-15 11:18:30.078078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.883 qpair failed and we were unable to recover it. 
00:26:32.883 [2024-05-15 11:18:30.087971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.883 [2024-05-15 11:18:30.088032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.883 [2024-05-15 11:18:30.088049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.883 [2024-05-15 11:18:30.088056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.883 [2024-05-15 11:18:30.088062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.883 [2024-05-15 11:18:30.088077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.883 qpair failed and we were unable to recover it.
00:26:32.883 [2024-05-15 11:18:30.098016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.883 [2024-05-15 11:18:30.098070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.883 [2024-05-15 11:18:30.098089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.883 [2024-05-15 11:18:30.098096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.883 [2024-05-15 11:18:30.098102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.883 [2024-05-15 11:18:30.098116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.883 qpair failed and we were unable to recover it.
00:26:32.883 [2024-05-15 11:18:30.108041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.883 [2024-05-15 11:18:30.108099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.883 [2024-05-15 11:18:30.108114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.883 [2024-05-15 11:18:30.108121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.883 [2024-05-15 11:18:30.108127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.883 [2024-05-15 11:18:30.108141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.883 qpair failed and we were unable to recover it.
00:26:32.883 [2024-05-15 11:18:30.118050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.883 [2024-05-15 11:18:30.118109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.883 [2024-05-15 11:18:30.118124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.883 [2024-05-15 11:18:30.118131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.883 [2024-05-15 11:18:30.118137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.883 [2024-05-15 11:18:30.118151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.883 qpair failed and we were unable to recover it.
00:26:32.883 [2024-05-15 11:18:30.128095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.883 [2024-05-15 11:18:30.128158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.883 [2024-05-15 11:18:30.128179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.883 [2024-05-15 11:18:30.128187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.883 [2024-05-15 11:18:30.128193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.883 [2024-05-15 11:18:30.128208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.883 qpair failed and we were unable to recover it.
00:26:32.883 [2024-05-15 11:18:30.138068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.883 [2024-05-15 11:18:30.138126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.883 [2024-05-15 11:18:30.138143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.883 [2024-05-15 11:18:30.138151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.883 [2024-05-15 11:18:30.138157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:32.883 [2024-05-15 11:18:30.138179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:32.883 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.148123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.148187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.148206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.148214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.148220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.143 [2024-05-15 11:18:30.148236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.143 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.158191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.158252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.158270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.158277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.158284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.143 [2024-05-15 11:18:30.158299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.143 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.168246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.168305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.168320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.168327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.168333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.143 [2024-05-15 11:18:30.168347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.143 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.178227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.178284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.178300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.178307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.178313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.143 [2024-05-15 11:18:30.178327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.143 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.188198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.188254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.188272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.188279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.188285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.143 [2024-05-15 11:18:30.188299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.143 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.198279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.198332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.198347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.198354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.198359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.143 [2024-05-15 11:18:30.198373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.143 qpair failed and we were unable to recover it.
00:26:33.143 [2024-05-15 11:18:30.208256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.143 [2024-05-15 11:18:30.208312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.143 [2024-05-15 11:18:30.208327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.143 [2024-05-15 11:18:30.208333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.143 [2024-05-15 11:18:30.208339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.208353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.218359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.218416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.218430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.218437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.218443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.218457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.228428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.228480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.228495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.228502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.228511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.228525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.238412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.238472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.238486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.238493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.238499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.238513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.248443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.248511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.248526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.248532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.248538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.248551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.258469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.258526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.258541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.258548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.258554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.258567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.268488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.268540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.268556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.268563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.268568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.268582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.278589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.278644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.278662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.278669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.278674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.278688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.288538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.288600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.288615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.288622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.288627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.288641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.298588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.298638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.298653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.298660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.298666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.298681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.308648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.308701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.308717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.308724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.308730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.308744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.318634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.318691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.318706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.318713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.318725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.318738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.328694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.328750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.328765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.328772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.328778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.328792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.338673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.144 [2024-05-15 11:18:30.338726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.144 [2024-05-15 11:18:30.338742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.144 [2024-05-15 11:18:30.338749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.144 [2024-05-15 11:18:30.338755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.144 [2024-05-15 11:18:30.338768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.144 qpair failed and we were unable to recover it.
00:26:33.144 [2024-05-15 11:18:30.348710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.145 [2024-05-15 11:18:30.348766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.145 [2024-05-15 11:18:30.348781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.145 [2024-05-15 11:18:30.348787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.145 [2024-05-15 11:18:30.348793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.145 [2024-05-15 11:18:30.348807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.145 qpair failed and we were unable to recover it.
00:26:33.145 [2024-05-15 11:18:30.358750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.145 [2024-05-15 11:18:30.358805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.145 [2024-05-15 11:18:30.358820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.145 [2024-05-15 11:18:30.358827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.145 [2024-05-15 11:18:30.358832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.145 [2024-05-15 11:18:30.358846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.145 qpair failed and we were unable to recover it.
00:26:33.145 [2024-05-15 11:18:30.368692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.145 [2024-05-15 11:18:30.368747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.145 [2024-05-15 11:18:30.368762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.145 [2024-05-15 11:18:30.368768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.145 [2024-05-15 11:18:30.368774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.145 [2024-05-15 11:18:30.368787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.145 qpair failed and we were unable to recover it.
00:26:33.145 [2024-05-15 11:18:30.378797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.145 [2024-05-15 11:18:30.378855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.145 [2024-05-15 11:18:30.378869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.145 [2024-05-15 11:18:30.378876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.145 [2024-05-15 11:18:30.378882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.145 [2024-05-15 11:18:30.378895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.145 qpair failed and we were unable to recover it.
00:26:33.145 [2024-05-15 11:18:30.388755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.145 [2024-05-15 11:18:30.388812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.145 [2024-05-15 11:18:30.388827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.145 [2024-05-15 11:18:30.388833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.145 [2024-05-15 11:18:30.388839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.145 [2024-05-15 11:18:30.388852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.145 qpair failed and we were unable to recover it.
00:26:33.145 [2024-05-15 11:18:30.398915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.145 [2024-05-15 11:18:30.398973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.145 [2024-05-15 11:18:30.398987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.145 [2024-05-15 11:18:30.398994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.145 [2024-05-15 11:18:30.399000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.145 [2024-05-15 11:18:30.399014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.145 qpair failed and we were unable to recover it.
00:26:33.404 [2024-05-15 11:18:30.408875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.404 [2024-05-15 11:18:30.408938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.404 [2024-05-15 11:18:30.408957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.404 [2024-05-15 11:18:30.408964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.404 [2024-05-15 11:18:30.408973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.404 [2024-05-15 11:18:30.408989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.404 qpair failed and we were unable to recover it.
00:26:33.404 [2024-05-15 11:18:30.418922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.404 [2024-05-15 11:18:30.418979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.404 [2024-05-15 11:18:30.418998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.404 [2024-05-15 11:18:30.419005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.404 [2024-05-15 11:18:30.419011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.404 [2024-05-15 11:18:30.419026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.404 qpair failed and we were unable to recover it.
00:26:33.404 [2024-05-15 11:18:30.428943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.404 [2024-05-15 11:18:30.429003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.404 [2024-05-15 11:18:30.429019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.404 [2024-05-15 11:18:30.429025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.404 [2024-05-15 11:18:30.429031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.404 [2024-05-15 11:18:30.429046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.404 qpair failed and we were unable to recover it.
00:26:33.404 [2024-05-15 11:18:30.439023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.404 [2024-05-15 11:18:30.439080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.404 [2024-05-15 11:18:30.439095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.404 [2024-05-15 11:18:30.439102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.404 [2024-05-15 11:18:30.439108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:33.404 [2024-05-15 11:18:30.439121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:33.404 qpair failed and we were unable to recover it.
00:26:33.404 [2024-05-15 11:18:30.448999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.404 [2024-05-15 11:18:30.449057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.404 [2024-05-15 11:18:30.449072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.404 [2024-05-15 11:18:30.449079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.404 [2024-05-15 11:18:30.449084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.404 [2024-05-15 11:18:30.449098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.404 qpair failed and we were unable to recover it. 
00:26:33.404 [2024-05-15 11:18:30.459024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.404 [2024-05-15 11:18:30.459108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.404 [2024-05-15 11:18:30.459123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.404 [2024-05-15 11:18:30.459129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.404 [2024-05-15 11:18:30.459135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.404 [2024-05-15 11:18:30.459149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.404 qpair failed and we were unable to recover it. 
00:26:33.404 [2024-05-15 11:18:30.469061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.404 [2024-05-15 11:18:30.469114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.404 [2024-05-15 11:18:30.469129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.404 [2024-05-15 11:18:30.469136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.469142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.469156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.479088] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.479149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.479167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.479175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.479181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.479195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.489107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.489166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.489181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.489188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.489194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.489208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.499175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.499224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.499238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.499246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.499254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.499268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.509197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.509251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.509267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.509274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.509280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.509295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.519212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.519267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.519282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.519288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.519294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.519308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.529203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.529262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.529276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.529283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.529289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.529303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.539250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.539305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.539319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.539326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.539332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.405 [2024-05-15 11:18:30.539346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.549250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.549334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.549360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.549371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.549380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.405 [2024-05-15 11:18:30.549402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.559251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.559305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.559321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.559328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.559334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.405 [2024-05-15 11:18:30.559350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.569340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.569394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.569409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.569416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.569422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.405 [2024-05-15 11:18:30.569437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.579368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.579422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.579437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.579445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.579451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.405 [2024-05-15 11:18:30.579466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.589386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.589445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.589459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.589481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.589487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.405 [2024-05-15 11:18:30.589501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.405 qpair failed and we were unable to recover it. 
00:26:33.405 [2024-05-15 11:18:30.599469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.405 [2024-05-15 11:18:30.599529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.405 [2024-05-15 11:18:30.599543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.405 [2024-05-15 11:18:30.599550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.405 [2024-05-15 11:18:30.599556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.405 [2024-05-15 11:18:30.599570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.406 [2024-05-15 11:18:30.609451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.406 [2024-05-15 11:18:30.609515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.406 [2024-05-15 11:18:30.609530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.406 [2024-05-15 11:18:30.609537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.406 [2024-05-15 11:18:30.609543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.406 [2024-05-15 11:18:30.609557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.406 [2024-05-15 11:18:30.619413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.406 [2024-05-15 11:18:30.619470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.406 [2024-05-15 11:18:30.619485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.406 [2024-05-15 11:18:30.619492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.406 [2024-05-15 11:18:30.619498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.406 [2024-05-15 11:18:30.619512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.406 [2024-05-15 11:18:30.629490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.406 [2024-05-15 11:18:30.629551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.406 [2024-05-15 11:18:30.629568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.406 [2024-05-15 11:18:30.629578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.406 [2024-05-15 11:18:30.629587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.406 [2024-05-15 11:18:30.629606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.406 [2024-05-15 11:18:30.639546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.406 [2024-05-15 11:18:30.639602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.406 [2024-05-15 11:18:30.639617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.406 [2024-05-15 11:18:30.639624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.406 [2024-05-15 11:18:30.639630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.406 [2024-05-15 11:18:30.639645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.406 [2024-05-15 11:18:30.649574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.406 [2024-05-15 11:18:30.649634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.406 [2024-05-15 11:18:30.649648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.406 [2024-05-15 11:18:30.649655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.406 [2024-05-15 11:18:30.649661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.406 [2024-05-15 11:18:30.649675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.406 [2024-05-15 11:18:30.659605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.406 [2024-05-15 11:18:30.659659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.406 [2024-05-15 11:18:30.659674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.406 [2024-05-15 11:18:30.659681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.406 [2024-05-15 11:18:30.659687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.406 [2024-05-15 11:18:30.659701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.406 qpair failed and we were unable to recover it. 
00:26:33.666 [2024-05-15 11:18:30.669667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.666 [2024-05-15 11:18:30.669725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.666 [2024-05-15 11:18:30.669739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.666 [2024-05-15 11:18:30.669746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.666 [2024-05-15 11:18:30.669752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.666 [2024-05-15 11:18:30.669766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.666 qpair failed and we were unable to recover it. 
00:26:33.666 [2024-05-15 11:18:30.679679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.666 [2024-05-15 11:18:30.679750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.666 [2024-05-15 11:18:30.679768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.666 [2024-05-15 11:18:30.679775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.666 [2024-05-15 11:18:30.679780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.666 [2024-05-15 11:18:30.679795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.666 qpair failed and we were unable to recover it. 
00:26:33.666 [2024-05-15 11:18:30.689652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.666 [2024-05-15 11:18:30.689709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.666 [2024-05-15 11:18:30.689723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.666 [2024-05-15 11:18:30.689730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.666 [2024-05-15 11:18:30.689736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.666 [2024-05-15 11:18:30.689750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.666 qpair failed and we were unable to recover it. 
00:26:33.666 [2024-05-15 11:18:30.699760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.666 [2024-05-15 11:18:30.699817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.666 [2024-05-15 11:18:30.699831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.666 [2024-05-15 11:18:30.699838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.666 [2024-05-15 11:18:30.699844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.666 [2024-05-15 11:18:30.699858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.666 qpair failed and we were unable to recover it. 
00:26:33.666 [2024-05-15 11:18:30.709753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.666 [2024-05-15 11:18:30.709807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.666 [2024-05-15 11:18:30.709822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.666 [2024-05-15 11:18:30.709829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.666 [2024-05-15 11:18:30.709835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.666 [2024-05-15 11:18:30.709849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.719754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.719851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.719865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.719872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.719878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.719895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.729825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.729881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.729895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.729901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.729907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.729922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.739827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.739879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.739893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.739900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.739905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.739920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.749856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.749909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.749924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.749931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.749937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.749951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.759890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.759945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.759960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.759967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.759973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.759987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.769909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.769979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.769996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.770003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.770009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.770023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.779930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.779985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.780000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.780007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.780012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.780027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.789965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.790019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.790034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.790041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.790046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.790061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.799992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.800053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.800067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.800074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.800080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.800095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.810009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.810067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.810081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.810088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.810094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.810111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.820087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.820141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.820156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.820163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.820172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.820186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.830076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.830138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.830156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.830171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.830180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.830198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.840112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.840167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.840182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.840189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.840195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.667 [2024-05-15 11:18:30.840210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.667 qpair failed and we were unable to recover it. 
00:26:33.667 [2024-05-15 11:18:30.850126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.667 [2024-05-15 11:18:30.850180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.667 [2024-05-15 11:18:30.850195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.667 [2024-05-15 11:18:30.850201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.667 [2024-05-15 11:18:30.850207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.850221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.860172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.860233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.860248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.860255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.860260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.860275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.870197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.870248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.870263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.870269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.870275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.870290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.880224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.880281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.880296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.880303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.880309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.880323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.890240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.890295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.890309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.890316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.890322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.890336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.900211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.900267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.900281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.900288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.900297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.900312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.910305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.910361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.910375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.910382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.910388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.910401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.668 [2024-05-15 11:18:30.920324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.668 [2024-05-15 11:18:30.920380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.668 [2024-05-15 11:18:30.920395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.668 [2024-05-15 11:18:30.920401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.668 [2024-05-15 11:18:30.920407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.668 [2024-05-15 11:18:30.920420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.668 qpair failed and we were unable to recover it. 
00:26:33.926 [2024-05-15 11:18:30.930294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.926 [2024-05-15 11:18:30.930353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.926 [2024-05-15 11:18:30.930375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.926 [2024-05-15 11:18:30.930382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.926 [2024-05-15 11:18:30.930388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.926 [2024-05-15 11:18:30.930402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.926 qpair failed and we were unable to recover it. 
00:26:33.926 [2024-05-15 11:18:30.940375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.926 [2024-05-15 11:18:30.940433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.926 [2024-05-15 11:18:30.940447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.926 [2024-05-15 11:18:30.940454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.926 [2024-05-15 11:18:30.940460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa698000b90 00:26:33.926 [2024-05-15 11:18:30.940473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.926 qpair failed and we were unable to recover it. 
00:26:33.926 [2024-05-15 11:18:30.950435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.926 [2024-05-15 11:18:30.950499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.926 [2024-05-15 11:18:30.950526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.926 [2024-05-15 11:18:30.950537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.926 [2024-05-15 11:18:30.950546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.926 [2024-05-15 11:18:30.950567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.926 qpair failed and we were unable to recover it. 
00:26:33.926 [2024-05-15 11:18:30.960398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.926 [2024-05-15 11:18:30.960457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.926 [2024-05-15 11:18:30.960472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.926 [2024-05-15 11:18:30.960479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.926 [2024-05-15 11:18:30.960485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:30.960499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:30.970499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:30.970567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:30.970583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:30.970590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:30.970596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:30.970610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:30.980511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:30.980567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:30.980582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:30.980590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:30.980596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:30.980610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:30.990546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:30.990613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:30.990628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:30.990638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:30.990644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:30.990658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.000522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.000579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.000595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.000601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.000607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.000621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.010587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.010639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.010654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.010661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.010667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.010681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.020634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.020688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.020704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.020710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.020716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.020730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.030656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.030708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.030722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.030729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.030734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.030748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.040701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.040755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.040770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.040776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.040782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.040795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.050716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.050775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.050790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.050797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.050803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.050816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.060800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.060856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.060870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.060877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.060882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.060896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.070787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.070846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.070861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.070867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.070873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.070886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.080816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.080874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.080889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.080899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.080905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.080919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.090757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.090818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.090833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.090839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.927 [2024-05-15 11:18:31.090845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.927 [2024-05-15 11:18:31.090859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.927 qpair failed and we were unable to recover it. 
00:26:33.927 [2024-05-15 11:18:31.100808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.927 [2024-05-15 11:18:31.100868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.927 [2024-05-15 11:18:31.100883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.927 [2024-05-15 11:18:31.100890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.100896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.100909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.110893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.110951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.110966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.110973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.110979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.110993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.120928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.120984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.121000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.121008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.121014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.121028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.130892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.130949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.130965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.130972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.130977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.130992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.140904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.140961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.140976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.140983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.140989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.141003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.151000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.151055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.151070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.151078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.151083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.151098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.161008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.161065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.161080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.161086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.161092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.161106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.171108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.171170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.171185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.171195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.171201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.171215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:33.928 [2024-05-15 11:18:31.181018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.928 [2024-05-15 11:18:31.181115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.928 [2024-05-15 11:18:31.181130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.928 [2024-05-15 11:18:31.181137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.928 [2024-05-15 11:18:31.181142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:33.928 [2024-05-15 11:18:31.181156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.928 qpair failed and we were unable to recover it. 
00:26:34.186 [2024-05-15 11:18:31.191063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.186 [2024-05-15 11:18:31.191122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.191141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.191148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.191155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.191176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.201177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.201240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.201258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.201265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.201272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.201288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.211158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.211222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.211238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.211245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.211251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.211266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.221180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.221235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.221250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.221257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.221263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.221277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.231265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.231320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.231336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.231343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.231349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.231362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.241277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.241345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.241360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.241366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.241373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.241386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.251220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.251276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.251290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.251297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.251303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.251317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.261338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.261396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.261417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.261424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.261431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.261446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.271291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.271342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.271357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.271364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.271370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.271384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.281328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.281384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.281399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.281407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.281413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.281427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.291349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.291410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.291425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.291433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.291438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.291452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.301429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.301489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.301504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.301511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.301517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.301530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.311478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.311533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.311548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.311555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.311561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.311574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.321469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.321542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.321557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.321564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.321570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.321584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.331450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.331502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.331517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.331524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.331530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.331544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.341527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.341584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.341599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.341606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.341611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.341625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.351549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.351608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.351626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.351633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.351639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.351652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.361606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.361660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.361675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.361682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.361688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.361701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.371573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.371632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.371648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.371655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.371661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.371674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.381598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.187 [2024-05-15 11:18:31.381653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.187 [2024-05-15 11:18:31.381669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.187 [2024-05-15 11:18:31.381676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.187 [2024-05-15 11:18:31.381682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.187 [2024-05-15 11:18:31.381695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.187 qpair failed and we were unable to recover it. 
00:26:34.187 [2024-05-15 11:18:31.391625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.187 [2024-05-15 11:18:31.391710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.187 [2024-05-15 11:18:31.391726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.187 [2024-05-15 11:18:31.391733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.187 [2024-05-15 11:18:31.391739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.187 [2024-05-15 11:18:31.391756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.187 qpair failed and we were unable to recover it.
00:26:34.187 [2024-05-15 11:18:31.401761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.187 [2024-05-15 11:18:31.401878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.187 [2024-05-15 11:18:31.401893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.187 [2024-05-15 11:18:31.401900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.187 [2024-05-15 11:18:31.401907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.187 [2024-05-15 11:18:31.401920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.187 qpair failed and we were unable to recover it.
00:26:34.187 [2024-05-15 11:18:31.411688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.187 [2024-05-15 11:18:31.411748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.187 [2024-05-15 11:18:31.411762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.187 [2024-05-15 11:18:31.411769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.187 [2024-05-15 11:18:31.411775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.187 [2024-05-15 11:18:31.411788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.187 qpair failed and we were unable to recover it.
00:26:34.187 [2024-05-15 11:18:31.421781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.187 [2024-05-15 11:18:31.421836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.187 [2024-05-15 11:18:31.421851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.187 [2024-05-15 11:18:31.421858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.187 [2024-05-15 11:18:31.421864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.187 [2024-05-15 11:18:31.421878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.187 qpair failed and we were unable to recover it.
00:26:34.187 [2024-05-15 11:18:31.431822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.187 [2024-05-15 11:18:31.431886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.187 [2024-05-15 11:18:31.431900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.187 [2024-05-15 11:18:31.431907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.187 [2024-05-15 11:18:31.431913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.187 [2024-05-15 11:18:31.431927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.187 qpair failed and we were unable to recover it.
00:26:34.187 [2024-05-15 11:18:31.441779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.187 [2024-05-15 11:18:31.441835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.187 [2024-05-15 11:18:31.441852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.187 [2024-05-15 11:18:31.441859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.187 [2024-05-15 11:18:31.441865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.187 [2024-05-15 11:18:31.441879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.187 qpair failed and we were unable to recover it.
00:26:34.447 [2024-05-15 11:18:31.451878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.447 [2024-05-15 11:18:31.451944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.447 [2024-05-15 11:18:31.451963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.447 [2024-05-15 11:18:31.451971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.447 [2024-05-15 11:18:31.451977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.447 [2024-05-15 11:18:31.451993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.447 qpair failed and we were unable to recover it.
00:26:34.447 [2024-05-15 11:18:31.461913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.447 [2024-05-15 11:18:31.461968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.447 [2024-05-15 11:18:31.461986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.447 [2024-05-15 11:18:31.461994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.447 [2024-05-15 11:18:31.462000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.447 [2024-05-15 11:18:31.462016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.447 qpair failed and we were unable to recover it.
00:26:34.447 [2024-05-15 11:18:31.471928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.447 [2024-05-15 11:18:31.471984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.447 [2024-05-15 11:18:31.471999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.472007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.472012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.472027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.481951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.482007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.482022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.482030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.482036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.482056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.491972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.492026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.492041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.492048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.492054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.492068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.501994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.502048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.502062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.502069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.502075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.502088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.512038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.512095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.512112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.512119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.512126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.512140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.522090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.522146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.522160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.522170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.522177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.522191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.532123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.532212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.532230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.532236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.532242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.532256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.542116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.542184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.542199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.542205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.542211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.542225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.552170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.552223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.552238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.552244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.552250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.552264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.562195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.562251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.562265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.562272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.562278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.562291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.572194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.572253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.572267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.572274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.572280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.572296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.582252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.582313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.582327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.582334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.582340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.582353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.592271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.592354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.592369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.592376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.592381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.592395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.602319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.602377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.448 [2024-05-15 11:18:31.602391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.448 [2024-05-15 11:18:31.602398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.448 [2024-05-15 11:18:31.602404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.448 [2024-05-15 11:18:31.602418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.448 qpair failed and we were unable to recover it.
00:26:34.448 [2024-05-15 11:18:31.612339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.448 [2024-05-15 11:18:31.612396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.612411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.612417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.612423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.612437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.622370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.622426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.622443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.622450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.622456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.622470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.632404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.632465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.632480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.632487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.632493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.632507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.642433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.642491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.642506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.642512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.642518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.642532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.652478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.652532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.652547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.652553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.652559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.652573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.662483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.662538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.662553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.662559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.662568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.662582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.672512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.672571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.672585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.672592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.672597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.672611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.682564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.682621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.682636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.682642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.682648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.682661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.692623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.692680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.692695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.692702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.692708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.692721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.449 [2024-05-15 11:18:31.702639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.449 [2024-05-15 11:18:31.702693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.449 [2024-05-15 11:18:31.702708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.449 [2024-05-15 11:18:31.702715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.449 [2024-05-15 11:18:31.702720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10
00:26:34.449 [2024-05-15 11:18:31.702734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.449 qpair failed and we were unable to recover it.
00:26:34.707 [2024-05-15 11:18:31.712641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.707 [2024-05-15 11:18:31.712704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.707 [2024-05-15 11:18:31.712723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.707 [2024-05-15 11:18:31.712731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.707 [2024-05-15 11:18:31.712737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.707 [2024-05-15 11:18:31.712752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.707 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.722686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.722750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.722769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.722776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.722782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.708 [2024-05-15 11:18:31.722797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.732712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.732771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.732787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.732793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.732799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.708 [2024-05-15 11:18:31.732813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.742797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.742857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.742872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.742879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.742885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.708 [2024-05-15 11:18:31.742898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.752759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.752814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.752829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.752835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.752845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.708 [2024-05-15 11:18:31.752859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.762810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.762868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.762883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.762890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.762896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d5c10 00:26:34.708 [2024-05-15 11:18:31.762910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.772853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.772928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.772949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.772957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.772963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa690000b90 00:26:34.708 [2024-05-15 11:18:31.772979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.782854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.782913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.782929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.782936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.782943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa690000b90 00:26:34.708 [2024-05-15 11:18:31.782957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.792879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.792945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.792971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.792983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.792993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa688000b90 00:26:34.708 [2024-05-15 11:18:31.793014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:34.708 qpair failed and we were unable to recover it. 
00:26:34.708 [2024-05-15 11:18:31.802908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.708 [2024-05-15 11:18:31.802965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.708 [2024-05-15 11:18:31.802981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.708 [2024-05-15 11:18:31.802988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.708 [2024-05-15 11:18:31.802994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa688000b90 00:26:34.708 [2024-05-15 11:18:31.803010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:34.708 qpair failed and we were unable to recover it. 00:26:34.708 [2024-05-15 11:18:31.803078] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:34.708 A controller has encountered a failure and is being reset. 00:26:34.708 [2024-05-15 11:18:31.803130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e3770 (9): Bad file descriptor 00:26:34.966 Controller properly reset. 
00:26:34.966 Initializing NVMe Controllers 00:26:34.966 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:34.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:34.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:34.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:34.966 Initialization complete. Launching workers. 00:26:34.966 Starting thread on core 1 00:26:34.966 Starting thread on core 2 00:26:34.966 Starting thread on core 3 00:26:34.966 Starting thread on core 0 00:26:34.966 11:18:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:34.966 00:26:34.966 real 0m11.490s 00:26:34.966 user 0m21.579s 00:26:34.966 sys 0m4.329s 00:26:34.966 11:18:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:34.966 11:18:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.966 ************************************ 00:26:34.966 END TEST nvmf_target_disconnect_tc2 00:26:34.966 ************************************ 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:34.966 11:18:32 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.966 rmmod nvme_tcp 00:26:34.966 rmmod nvme_fabrics 00:26:34.966 rmmod nvme_keyring 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2409285 ']' 00:26:34.966 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2409285 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2409285 ']' 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 2409285 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2409285 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_4 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2409285' 00:26:34.967 killing process with pid 2409285 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@966 -- # kill 2409285 00:26:34.967 [2024-05-15 11:18:32.141381] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:34.967 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 2409285 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.225 11:18:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.755 11:18:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.755 00:26:37.755 real 0m19.728s 00:26:37.755 user 0m49.472s 00:26:37.755 sys 0m8.890s 00:26:37.755 11:18:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:37.755 11:18:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:37.755 ************************************ 00:26:37.755 END TEST nvmf_target_disconnect 00:26:37.755 ************************************ 00:26:37.755 11:18:34 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:26:37.755 11:18:34 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:37.755 11:18:34 nvmf_tcp 
-- common/autotest_common.sh@10 -- # set +x 00:26:37.755 11:18:34 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:37.755 00:26:37.755 real 20m45.662s 00:26:37.755 user 45m21.263s 00:26:37.755 sys 6m12.518s 00:26:37.755 11:18:34 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:37.755 11:18:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.755 ************************************ 00:26:37.755 END TEST nvmf_tcp 00:26:37.755 ************************************ 00:26:37.755 11:18:34 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:26:37.755 11:18:34 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:37.755 11:18:34 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:37.755 11:18:34 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:37.755 11:18:34 -- common/autotest_common.sh@10 -- # set +x 00:26:37.755 ************************************ 00:26:37.755 START TEST spdkcli_nvmf_tcp 00:26:37.755 ************************************ 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:37.755 * Looking for test storage... 
00:26:37.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2410962 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2410962 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 2410962 ']' 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:37.755 11:18:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.755 [2024-05-15 11:18:34.709227] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:26:37.756 [2024-05-15 11:18:34.709275] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410962 ] 00:26:37.756 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.756 [2024-05-15 11:18:34.763426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:37.756 [2024-05-15 11:18:34.843637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.756 [2024-05-15 11:18:34.843640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.319 11:18:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:38.319 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:38.319 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:38.319 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 
00:26:38.319 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:38.319 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:38.319 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:38.319 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:38.319 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:38.319 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:38.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:38.319 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:38.319 ' 00:26:40.841 [2024-05-15 11:18:37.937017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.209 [2024-05-15 11:18:39.112624] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:42.209 [2024-05-15 11:18:39.112977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:44.102 [2024-05-15 11:18:41.279667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:45.992 [2024-05-15 11:18:43.137399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:47.359 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:47.359 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:47.359 Executing command: ['/bdevs/malloc 
create 32 512 Malloc3', 'Malloc3', True] 00:26:47.359 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:47.359 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:47.359 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:47.359 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:47.359 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:47.359 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:47.359 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create 
tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:47.359 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:47.359 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.616 11:18:44 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:47.616 11:18:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:47.872 11:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:47.872 11:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:47.872 11:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:47.872 11:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:47.872 11:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.873 11:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:47.873 11:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:47.873 11:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.128 11:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:48.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:48.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:48.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:48.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:48.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' 
'\''127.0.0.1:4261'\'' 00:26:48.128 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:48.128 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:48.128 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:48.129 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:48.129 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:48.129 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:48.129 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:48.129 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:48.129 ' 00:26:53.457 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:53.457 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:53.457 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:53.457 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:53.457 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:53.457 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:53.457 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:53.457 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:53.457 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:53.457 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:53.457 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:53.457 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', 
False] 00:26:53.457 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:53.457 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2410962 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2410962 ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2410962 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2410962 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2410962' 00:26:53.457 killing process with pid 2410962 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 2410962 00:26:53.457 [2024-05-15 11:18:50.160430] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 2410962 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@13 -- # '[' -n 2410962 ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2410962 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2410962 ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2410962 00:26:53.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2410962) - No such process 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 2410962 is not found' 00:26:53.457 Process with pid 2410962 is not found 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:53.457 00:26:53.457 real 0m15.798s 00:26:53.457 user 0m32.743s 00:26:53.457 sys 0m0.673s 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:53.457 11:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.457 ************************************ 00:26:53.457 END TEST spdkcli_nvmf_tcp 00:26:53.457 ************************************ 00:26:53.457 11:18:50 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:53.457 11:18:50 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:53.457 11:18:50 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:53.457 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:26:53.457 ************************************ 00:26:53.457 START TEST nvmf_identify_passthru 00:26:53.457 
************************************ 00:26:53.457 11:18:50 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:53.457 * Looking for test storage... 00:26:53.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.457 11:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.457 11:18:50 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.457 11:18:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.457 11:18:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.457 11:18:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.457 11:18:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.457 11:18:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.457 11:18:50 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.457 11:18:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:53.457 11:18:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.457 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.458 11:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.458 11:18:50 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.458 11:18:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.458 11:18:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.458 11:18:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.458 11:18:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.458 11:18:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.458 11:18:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:26:53.458 11:18:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.458 11:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.458 11:18:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.458 11:18:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.458 11:18:50 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.458 11:18:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:58.715 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.715 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:26:58.715 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.715 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.715 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:58.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:58.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:58.716 11:18:55 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:58.716 Found net devices under 0000:86:00.0: cvl_0_0 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.716 11:18:55 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:58.716 Found net devices under 0000:86:00.1: cvl_0_1 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.716 11:18:55 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:26:58.716 00:26:58.716 --- 10.0.0.2 ping statistics --- 00:26:58.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.716 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:58.716 00:26:58.716 --- 10.0.0.1 ping statistics --- 00:26:58.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.716 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.716 11:18:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.716 11:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:58.716 11:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:26:58.716 11:18:55 nvmf_identify_passthru -- 
common/autotest_common.sh@1510 -- # bdfs=() 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:26:58.716 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:58.717 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:26:58.717 11:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:58.975 11:18:56 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:26:58.975 11:18:56 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:5e:00.0 00:26:58.975 11:18:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:5e:00.0 00:26:58.975 11:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:26:58.975 11:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:26:58.975 11:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:26:58.975 11:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:58.975 11:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:58.975 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.161 11:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:03.161 11:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:03.161 11:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:27:03.161 11:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:03.161 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2418150 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2418150 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 2418150 ']' 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.357 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:07.357 11:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:07.358 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:07.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.358 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:07.358 11:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.358 [2024-05-15 11:19:04.375653] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:27:07.358 [2024-05-15 11:19:04.375700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.358 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.358 [2024-05-15 11:19:04.428615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.358 [2024-05-15 11:19:04.507966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.358 [2024-05-15 11:19:04.508002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.358 [2024-05-15 11:19:04.508009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.358 [2024-05-15 11:19:04.508015] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.358 [2024-05-15 11:19:04.508020] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:07.358 [2024-05-15 11:19:04.508088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.358 [2024-05-15 11:19:04.508104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.358 [2024-05-15 11:19:04.508123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.358 [2024-05-15 11:19:04.508124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:27:08.290 11:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.290 INFO: Log level set to 20 00:27:08.290 INFO: Requests: 00:27:08.290 { 00:27:08.290 "jsonrpc": "2.0", 00:27:08.290 "method": "nvmf_set_config", 00:27:08.290 "id": 1, 00:27:08.290 "params": { 00:27:08.290 "admin_cmd_passthru": { 00:27:08.290 "identify_ctrlr": true 00:27:08.290 } 00:27:08.290 } 00:27:08.290 } 00:27:08.290 00:27:08.290 INFO: response: 00:27:08.290 { 00:27:08.290 "jsonrpc": "2.0", 00:27:08.290 "id": 1, 00:27:08.290 "result": true 00:27:08.290 } 00:27:08.290 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.290 11:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.290 INFO: Setting log level to 20 00:27:08.290 INFO: Setting log level to 20 00:27:08.290 INFO: Log level set to 20 00:27:08.290 INFO: Log level set to 20 00:27:08.290 
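The `rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr` call above is logged as a raw JSON-RPC 2.0 exchange over `/var/tmp/spdk.sock`. As a minimal sketch, the request envelope shown in the log can be assembled like this (assembly only — no socket I/O; the field values are copied from the request printed above):

```shell
# Build the JSON-RPC 2.0 request body that the log shows being sent for
# nvmf_set_config with admin-command passthru identify enabled.
method="nvmf_set_config"
request=$(printf '{"jsonrpc": "2.0", "method": "%s", "id": 1, "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}' "$method")

# In the real test this payload reaches the target via scripts/rpc.py /
# rpc_cmd; here we only print it.
echo "$request"
```

In the live run, the target answers with the `"result": true` response seen in the log once the config is accepted.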
INFO: Requests: 00:27:08.290 { 00:27:08.290 "jsonrpc": "2.0", 00:27:08.290 "method": "framework_start_init", 00:27:08.290 "id": 1 00:27:08.290 } 00:27:08.290 00:27:08.290 INFO: Requests: 00:27:08.290 { 00:27:08.290 "jsonrpc": "2.0", 00:27:08.290 "method": "framework_start_init", 00:27:08.290 "id": 1 00:27:08.290 } 00:27:08.290 00:27:08.290 [2024-05-15 11:19:05.295663] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:08.290 INFO: response: 00:27:08.290 { 00:27:08.290 "jsonrpc": "2.0", 00:27:08.290 "id": 1, 00:27:08.290 "result": true 00:27:08.290 } 00:27:08.290 00:27:08.290 INFO: response: 00:27:08.290 { 00:27:08.290 "jsonrpc": "2.0", 00:27:08.290 "id": 1, 00:27:08.290 "result": true 00:27:08.290 } 00:27:08.290 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.290 11:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.290 INFO: Setting log level to 40 00:27:08.290 INFO: Setting log level to 40 00:27:08.290 INFO: Setting log level to 40 00:27:08.290 [2024-05-15 11:19:05.309151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.290 11:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.290 11:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:08.290 11:19:05 
nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.290 11:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.569 Nvme0n1 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.569 [2024-05-15 11:19:08.200444] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:11.569 [2024-05-15 11:19:08.200686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.569 
11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.569 [ 00:27:11.569 { 00:27:11.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:11.569 "subtype": "Discovery", 00:27:11.569 "listen_addresses": [], 00:27:11.569 "allow_any_host": true, 00:27:11.569 "hosts": [] 00:27:11.569 }, 00:27:11.569 { 00:27:11.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.569 "subtype": "NVMe", 00:27:11.569 "listen_addresses": [ 00:27:11.569 { 00:27:11.569 "trtype": "TCP", 00:27:11.569 "adrfam": "IPv4", 00:27:11.569 "traddr": "10.0.0.2", 00:27:11.569 "trsvcid": "4420" 00:27:11.569 } 00:27:11.569 ], 00:27:11.569 "allow_any_host": true, 00:27:11.569 "hosts": [], 00:27:11.569 "serial_number": "SPDK00000000000001", 00:27:11.569 "model_number": "SPDK bdev Controller", 00:27:11.569 "max_namespaces": 1, 00:27:11.569 "min_cntlid": 1, 00:27:11.569 "max_cntlid": 65519, 00:27:11.569 "namespaces": [ 00:27:11.569 { 00:27:11.569 "nsid": 1, 00:27:11.569 "bdev_name": "Nvme0n1", 00:27:11.569 "name": "Nvme0n1", 00:27:11.569 "nguid": "488D154E05FC4AB389F6091C88354732", 00:27:11.569 "uuid": "488d154e-05fc-4ab3-89f6-091c88354732" 00:27:11.569 } 00:27:11.569 ] 00:27:11.569 } 00:27:11.569 ] 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:11.569 EAL: No free 
2048 kB hugepages reported on node 1 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:11.569 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:11.569 11:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 
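The passthru check above runs `spdk_nvme_identify` twice — once against the local PCIe controller and once against the NVMe/TCP subsystem — and compares the extracted `Serial Number:` and `Model Number:` fields. A minimal sketch of the extraction step, using a one-line stand-in for the identify output (the real tool prints many more fields; the serial value is the one from this log):

```shell
# Stand-in for one line of `spdk_nvme_identify` output.
identify_output="Serial Number: BTLJ72430F0E1P0FGN"

# Same pipeline as target/identify_passthru.sh: select the line, take field 3.
serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
echo "$serial"
```

If the serial (or model) reported through the TCP path differs from the local PCIe one, the `'!='` tests in the script fail and the passthru test aborts; here both sides match, so the run proceeds to teardown.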
00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.569 rmmod nvme_tcp 00:27:11.569 rmmod nvme_fabrics 00:27:11.569 rmmod nvme_keyring 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2418150 ']' 00:27:11.569 11:19:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2418150 00:27:11.569 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 2418150 ']' 00:27:11.570 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 2418150 00:27:11.570 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:27:11.570 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:11.570 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2418150 00:27:11.827 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:11.827 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:11.827 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2418150' 00:27:11.827 killing process with pid 2418150 00:27:11.827 11:19:08 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 2418150 00:27:11.827 [2024-05-15 11:19:08.848417] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:11.827 11:19:08 nvmf_identify_passthru -- 
common/autotest_common.sh@971 -- # wait 2418150 00:27:13.201 11:19:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:13.201 11:19:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:13.201 11:19:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:13.201 11:19:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:13.201 11:19:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:13.201 11:19:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.201 11:19:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:13.201 11:19:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.729 11:19:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:15.729 00:27:15.729 real 0m21.977s 00:27:15.729 user 0m30.649s 00:27:15.729 sys 0m4.844s 00:27:15.729 11:19:12 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:15.729 11:19:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.729 ************************************ 00:27:15.729 END TEST nvmf_identify_passthru 00:27:15.729 ************************************ 00:27:15.729 11:19:12 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:15.730 11:19:12 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:15.730 11:19:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:15.730 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:27:15.730 ************************************ 00:27:15.730 START TEST nvmf_dif 00:27:15.730 ************************************ 00:27:15.730 11:19:12 nvmf_dif -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:15.730 * Looking for test storage... 00:27:15.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.730 11:19:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.730 11:19:12 nvmf_dif -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.730 11:19:12 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.730 11:19:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.730 11:19:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.730 11:19:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.730 11:19:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.730 11:19:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:15.730 11:19:12 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.730 11:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:15.730 11:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:15.730 11:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:15.730 11:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:15.730 11:19:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.730 11:19:12 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:15.730 11:19:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.730 11:19:12 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.730 11:19:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:21.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:27:21.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:21.001 Found net devices under 0000:86:00.0: cvl_0_0 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:21.001 Found net devices under 0000:86:00.1: cvl_0_1 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.001 11:19:17 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.001 11:19:17 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:27:21.002 00:27:21.002 --- 10.0.0.2 ping statistics --- 00:27:21.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.002 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:27:21.002 00:27:21.002 --- 10.0.0.1 ping statistics --- 00:27:21.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.002 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:21.002 11:19:17 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:23.546 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:23.546 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:23.546 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.546 11:19:20 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.546 11:19:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:23.546 11:19:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2423615 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2423615 00:27:23.546 11:19:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 2423615 ']' 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:23.546 11:19:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.546 [2024-05-15 11:19:20.534670] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
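The `nvmf_tcp_init` portion of the trace above (the `ip netns` / `ip addr` / `iptables` / `ping` calls) can be condensed into one sketch. The interface names (`cvl_0_0`/`cvl_0_1`) and 10.0.0.x addresses are taken from this log and are NIC-specific; executing for real requires root, so a `RUN=echo` dry-run mode is added here to just print the commands:

```shell
# Condensed sketch of the namespace topology nvmf_tcp_init builds above.
setup_netns_topology() {
    local run="${RUN:-}"          # RUN=echo => dry run (print commands only)
    local target_if=cvl_0_0       # moves into the namespace, serves 10.0.0.2
    local initiator_if=cvl_0_1    # stays in the root namespace, uses 10.0.0.1
    local ns=cvl_0_0_ns_spdk

    $run ip -4 addr flush "$target_if"
    $run ip -4 addr flush "$initiator_if"

    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"

    $run ip addr add 10.0.0.1/24 dev "$initiator_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP port on the initiator side, then verify
    # reachability in both directions, exactly as the trace does.
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}

RUN=echo setup_netns_topology   # dry run: print the command sequence
```

Moving the target interface into its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, root namespace) over a real NIC pair.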
00:27:23.546 [2024-05-15 11:19:20.534711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.546 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.546 [2024-05-15 11:19:20.590949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.546 [2024-05-15 11:19:20.669304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.546 [2024-05-15 11:19:20.669339] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.546 [2024-05-15 11:19:20.669346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.546 [2024-05-15 11:19:20.669352] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.546 [2024-05-15 11:19:20.669358] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
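Once the target is up, the transport options assembled above (`-t tcp -o` from nvmf/common.sh plus target/dif.sh's `--dif-insert-or-strip`) are issued as a single RPC. A minimal sketch using the same dry-run convention; substituting SPDK's `scripts/rpc.py`, run against the target's RPC socket, would issue it for real:

```shell
RPC="${RPC:-echo}"   # dry run by default; point RPC at scripts/rpc.py for real use

create_dif_transport() {
    # Flags exactly as recorded in the trace: TCP transport with the '-o'
    # option from nvmf/common.sh plus DIF insert/strip enabled.
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
}

create_dif_transport
```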
00:27:23.546 [2024-05-15 11:19:20.669377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.111 11:19:21 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:24.111 11:19:21 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:27:24.111 11:19:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.111 11:19:21 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:24.111 11:19:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.111 11:19:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.111 11:19:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:24.111 11:19:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:24.111 11:19:21 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.111 11:19:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.369 [2024-05-15 11:19:21.378153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.369 11:19:21 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.369 11:19:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:24.369 11:19:21 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:24.369 11:19:21 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:24.369 11:19:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.369 ************************************ 00:27:24.369 START TEST fio_dif_1_default 00:27:24.369 ************************************ 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.369 bdev_null0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.369 [2024-05-15 11:19:21.454299] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:24.369 [2024-05-15 11:19:21.454483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.369 { 00:27:24.369 "params": { 00:27:24.369 "name": "Nvme$subsystem", 00:27:24.369 "trtype": "$TEST_TRANSPORT", 00:27:24.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.369 "adrfam": "ipv4", 00:27:24.369 "trsvcid": "$NVMF_PORT", 00:27:24.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.369 "hdgst": ${hdgst:-false}, 00:27:24.369 "ddgst": ${ddgst:-false} 
00:27:24.369 }, 00:27:24.369 "method": "bdev_nvme_attach_controller" 00:27:24.369 } 00:27:24.369 EOF 00:27:24.369 )") 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:24.369 11:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:24.370 "params": { 00:27:24.370 "name": "Nvme0", 00:27:24.370 "trtype": "tcp", 00:27:24.370 "traddr": "10.0.0.2", 00:27:24.370 "adrfam": "ipv4", 00:27:24.370 "trsvcid": "4420", 00:27:24.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.370 "hdgst": false, 00:27:24.370 "ddgst": false 00:27:24.370 }, 00:27:24.370 "method": "bdev_nvme_attach_controller" 00:27:24.370 }' 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:24.370 11:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.627 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:24.627 fio-3.35 
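The JSON handed to fio's `--spdk_json_conf` above is built by `gen_nvmf_target_json` in nvmf/common.sh: one `bdev_nvme_attach_controller` entry per subsystem id. A hand-rolled approximation follows; the real helper assembles the fragments with a heredoc and compacts them through `jq`, and the `"subsystems"`/`"bdev"` envelope here is the surrounding structure the printed fragments belong inside, not something shown verbatim in the trace:

```shell
# Approximation of gen_nvmf_target_json: emit one attach-controller
# config entry per subsystem id, joined with commas.
gen_target_json() {
    local sub config=()
    for sub in "${@:-0}"; do
        config+=("$(cat <<EOF
{"params": {"name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
  "hostnqn": "nqn.2016-06.io.spdk:host$sub",
  "hdgst": false, "ddgst": false},
 "method": "bdev_nvme_attach_controller"}
EOF
        )")
    done
    local IFS=,
    printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' "${config[*]}"
}

gen_target_json 0   # single-subsystem config, as in fio_dif_1_default
```

Passing two ids (`gen_target_json 0 1`) yields the two-controller config the later fio_dif_1_multi_subsystems run uses.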
00:27:24.627 Starting 1 thread 00:27:24.627 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.829 00:27:36.829 filename0: (groupid=0, jobs=1): err= 0: pid=2424129: Wed May 15 11:19:32 2024 00:27:36.829 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:27:36.829 slat (nsec): min=6045, max=26430, avg=6407.57, stdev=1298.65 00:27:36.829 clat (usec): min=40820, max=43519, avg=41005.13, stdev=201.81 00:27:36.829 lat (usec): min=40826, max=43545, avg=41011.53, stdev=202.20 00:27:36.829 clat percentiles (usec): 00:27:36.829 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:36.829 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:36.829 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:36.829 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:27:36.829 | 99.99th=[43779] 00:27:36.829 bw ( KiB/s): min= 384, max= 416, per=99.48%, avg=388.80, stdev=11.72, samples=20 00:27:36.829 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:36.829 lat (msec) : 50=100.00% 00:27:36.829 cpu : usr=94.63%, sys=5.13%, ctx=14, majf=0, minf=205 00:27:36.829 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.829 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.829 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:36.829 00:27:36.829 Run status group 0 (all jobs): 00:27:36.829 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10010-10010msec 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 00:27:36.829 real 0m11.135s 00:27:36.829 user 0m15.931s 00:27:36.829 sys 0m0.800s 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 ************************************ 00:27:36.829 END TEST fio_dif_1_default 00:27:36.829 ************************************ 00:27:36.829 11:19:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:36.829 11:19:32 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:36.829 11:19:32 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 ************************************ 00:27:36.829 START TEST fio_dif_1_multi_subsystems 00:27:36.829 ************************************ 00:27:36.829 11:19:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 bdev_null0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 [2024-05-15 11:19:32.664911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 bdev_null1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.829 { 00:27:36.829 "params": { 00:27:36.829 "name": "Nvme$subsystem", 00:27:36.829 "trtype": "$TEST_TRANSPORT", 00:27:36.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.829 "adrfam": "ipv4", 00:27:36.829 "trsvcid": "$NVMF_PORT", 00:27:36.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.829 "hdgst": ${hdgst:-false}, 00:27:36.829 "ddgst": ${ddgst:-false} 00:27:36.829 }, 00:27:36.829 "method": "bdev_nvme_attach_controller" 00:27:36.829 } 00:27:36.829 EOF 00:27:36.829 )") 00:27:36.829 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 
00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.830 { 00:27:36.830 "params": { 00:27:36.830 "name": "Nvme$subsystem", 00:27:36.830 "trtype": "$TEST_TRANSPORT", 00:27:36.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.830 "adrfam": "ipv4", 00:27:36.830 "trsvcid": "$NVMF_PORT", 00:27:36.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.830 "hdgst": ${hdgst:-false}, 00:27:36.830 "ddgst": ${ddgst:-false} 00:27:36.830 }, 00:27:36.830 "method": "bdev_nvme_attach_controller" 00:27:36.830 } 00:27:36.830 EOF 00:27:36.830 )") 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:36.830 "params": { 00:27:36.830 "name": "Nvme0", 00:27:36.830 "trtype": "tcp", 00:27:36.830 "traddr": "10.0.0.2", 00:27:36.830 "adrfam": "ipv4", 00:27:36.830 "trsvcid": "4420", 00:27:36.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.830 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.830 "hdgst": false, 00:27:36.830 "ddgst": false 00:27:36.830 }, 00:27:36.830 "method": "bdev_nvme_attach_controller" 00:27:36.830 },{ 00:27:36.830 "params": { 00:27:36.830 "name": "Nvme1", 00:27:36.830 "trtype": "tcp", 00:27:36.830 "traddr": "10.0.0.2", 00:27:36.830 "adrfam": "ipv4", 00:27:36.830 "trsvcid": "4420", 00:27:36.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.830 "hdgst": false, 00:27:36.830 "ddgst": false 00:27:36.830 }, 00:27:36.830 "method": "bdev_nvme_attach_controller" 00:27:36.830 }' 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:36.830 11:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.830 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:36.830 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:36.830 fio-3.35 00:27:36.830 Starting 2 threads 00:27:36.830 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.784 00:27:46.784 filename0: (groupid=0, jobs=1): err= 0: pid=2426114: Wed May 15 11:19:43 2024 00:27:46.784 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:27:46.784 slat (nsec): min=6157, max=40859, avg=7952.37, stdev=2661.41 00:27:46.784 clat (usec): min=40762, max=41975, avg=40989.34, stdev=122.35 00:27:46.784 lat (usec): min=40768, max=41987, avg=40997.29, stdev=122.61 00:27:46.784 clat percentiles (usec): 00:27:46.784 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:46.784 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:46.784 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:46.784 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:46.784 | 99.99th=[42206] 00:27:46.784 bw ( KiB/s): min= 384, max= 416, per=40.31%, avg=388.80, stdev=11.72, samples=20 00:27:46.784 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:46.784 lat (msec) : 50=100.00% 00:27:46.784 cpu : usr=97.53%, sys=2.22%, ctx=5, majf=0, minf=163 00:27:46.784 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:46.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.784 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.784 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:46.784 filename1: (groupid=0, jobs=1): err= 0: pid=2426115: Wed May 15 11:19:43 2024 00:27:46.784 read: IOPS=143, BW=573KiB/s (586kB/s)(5728KiB/10002msec) 00:27:46.784 slat (nsec): min=6141, max=28333, avg=7524.22, stdev=2202.92 00:27:46.784 clat (usec): min=443, max=42497, avg=27916.63, stdev=19099.75 00:27:46.784 lat (usec): min=449, max=42504, avg=27924.16, stdev=19099.45 00:27:46.784 clat percentiles (usec): 00:27:46.784 | 1.00th=[ 453], 5.00th=[ 457], 10.00th=[ 465], 20.00th=[ 494], 00:27:46.784 | 30.00th=[ 635], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:46.784 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:27:46.784 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:46.784 | 99.99th=[42730] 00:27:46.784 bw ( KiB/s): min= 384, max= 768, per=59.12%, avg=569.26, stdev=188.59, samples=19 00:27:46.784 iops : min= 96, max= 192, avg=142.32, stdev=47.15, samples=19 00:27:46.784 lat (usec) : 500=20.74%, 750=11.94% 00:27:46.784 lat (msec) : 50=67.32% 00:27:46.784 cpu : usr=97.78%, sys=1.97%, ctx=9, majf=0, minf=79 00:27:46.784 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.784 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.784 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:46.784 00:27:46.784 Run status group 0 (all jobs): 00:27:46.784 READ: bw=962KiB/s (986kB/s), 390KiB/s-573KiB/s (399kB/s-586kB/s), io=9632KiB (9863kB), run=10002-10008msec 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.784 11:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.784 11:19:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.784 00:27:46.784 real 0m11.371s 00:27:46.784 user 0m26.604s 00:27:46.784 sys 0m0.685s 00:27:46.784 11:19:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:46.784 11:19:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.784 ************************************ 00:27:46.785 END TEST fio_dif_1_multi_subsystems 00:27:46.785 ************************************ 00:27:46.785 11:19:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:46.785 11:19:44 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:46.785 11:19:44 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:46.785 11:19:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:47.043 ************************************ 00:27:47.043 START TEST fio_dif_rand_params 00:27:47.043 ************************************ 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.043 bdev_null0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.043 [2024-05-15 11:19:44.107420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.043 { 00:27:47.043 "params": { 00:27:47.043 "name": "Nvme$subsystem", 00:27:47.043 "trtype": 
"$TEST_TRANSPORT", 00:27:47.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.043 "adrfam": "ipv4", 00:27:47.043 "trsvcid": "$NVMF_PORT", 00:27:47.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.043 "hdgst": ${hdgst:-false}, 00:27:47.043 "ddgst": ${ddgst:-false} 00:27:47.043 }, 00:27:47.043 "method": "bdev_nvme_attach_controller" 00:27:47.043 } 00:27:47.043 EOF 00:27:47.043 )") 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- 
# grep libasan 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:47.043 11:19:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:47.043 "params": { 00:27:47.043 "name": "Nvme0", 00:27:47.043 "trtype": "tcp", 00:27:47.043 "traddr": "10.0.0.2", 00:27:47.043 "adrfam": "ipv4", 00:27:47.043 "trsvcid": "4420", 00:27:47.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.043 "hdgst": false, 00:27:47.043 "ddgst": false 00:27:47.043 }, 00:27:47.044 "method": "bdev_nvme_attach_controller" 00:27:47.044 }' 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:47.044 11:19:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.302 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:47.302 ... 00:27:47.302 fio-3.35 00:27:47.302 Starting 3 threads 00:27:47.302 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.854 00:27:53.854 filename0: (groupid=0, jobs=1): err= 0: pid=2427927: Wed May 15 11:19:49 2024 00:27:53.854 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(193MiB/5008msec) 00:27:53.854 slat (nsec): min=6449, max=25860, avg=10588.76, stdev=2442.08 00:27:53.854 clat (usec): min=3281, max=86910, avg=9699.66, stdev=9051.14 00:27:53.854 lat (usec): min=3288, max=86919, avg=9710.25, stdev=9051.30 00:27:53.854 clat percentiles (usec): 00:27:53.854 | 1.00th=[ 3818], 5.00th=[ 4113], 10.00th=[ 5276], 20.00th=[ 6194], 00:27:53.854 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 8029], 60.00th=[ 8586], 00:27:53.854 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[12387], 00:27:53.854 | 99.00th=[49546], 99.50th=[50070], 99.90th=[52167], 99.95th=[86508], 00:27:53.854 | 99.99th=[86508] 00:27:53.854 bw ( KiB/s): min=29696, max=50176, per=34.76%, avg=39552.00, stdev=6590.31, samples=10 00:27:53.854 iops : min= 232, max= 392, avg=309.00, stdev=51.49, samples=10 00:27:53.854 lat (msec) : 4=3.04%, 10=81.24%, 20=10.93%, 50=4.20%, 100=0.58% 00:27:53.854 cpu : usr=94.81%, sys=4.89%, ctx=8, majf=0, minf=54 00:27:53.854 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.854 issued rwts: total=1546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:53.854 filename0: (groupid=0, jobs=1): err= 0: pid=2427928: Wed May 15 11:19:49 2024 00:27:53.854 read: IOPS=306, BW=38.3MiB/s 
(40.2MB/s)(192MiB/5005msec) 00:27:53.854 slat (nsec): min=6405, max=37284, avg=10625.67, stdev=2652.76 00:27:53.854 clat (usec): min=3613, max=89513, avg=9772.40, stdev=8570.60 00:27:53.854 lat (usec): min=3620, max=89525, avg=9783.03, stdev=8570.74 00:27:53.854 clat percentiles (usec): 00:27:53.854 | 1.00th=[ 3949], 5.00th=[ 4178], 10.00th=[ 4817], 20.00th=[ 6259], 00:27:53.854 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 8455], 60.00th=[ 8979], 00:27:53.854 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12649], 00:27:53.854 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[89654], 00:27:53.854 | 99.99th=[89654] 00:27:53.854 bw ( KiB/s): min=25856, max=52480, per=34.47%, avg=39219.20, stdev=7430.18, samples=10 00:27:53.854 iops : min= 202, max= 410, avg=306.40, stdev=58.05, samples=10 00:27:53.854 lat (msec) : 4=1.56%, 10=74.71%, 20=19.49%, 50=3.46%, 100=0.78% 00:27:53.854 cpu : usr=93.84%, sys=5.86%, ctx=9, majf=0, minf=98 00:27:53.854 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.854 issued rwts: total=1534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:53.854 filename0: (groupid=0, jobs=1): err= 0: pid=2427929: Wed May 15 11:19:49 2024 00:27:53.854 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(176MiB/5045msec) 00:27:53.854 slat (nsec): min=6380, max=26851, avg=10639.14, stdev=2507.09 00:27:53.854 clat (usec): min=3352, max=90592, avg=10729.67, stdev=10974.60 00:27:53.854 lat (usec): min=3359, max=90604, avg=10740.31, stdev=10974.72 00:27:53.854 clat percentiles (usec): 00:27:53.854 | 1.00th=[ 3785], 5.00th=[ 4146], 10.00th=[ 5342], 20.00th=[ 6325], 00:27:53.854 | 30.00th=[ 6915], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:27:53.854 | 70.00th=[ 9503], 
80.00th=[10290], 90.00th=[11469], 95.00th=[46924], 00:27:53.854 | 99.00th=[50070], 99.50th=[52167], 99.90th=[90702], 99.95th=[90702], 00:27:53.854 | 99.99th=[90702] 00:27:53.854 bw ( KiB/s): min=12288, max=43776, per=31.56%, avg=35916.80, stdev=9290.01, samples=10 00:27:53.854 iops : min= 96, max= 342, avg=280.60, stdev=72.58, samples=10 00:27:53.854 lat (msec) : 4=3.70%, 10=71.96%, 20=18.29%, 50=4.77%, 100=1.28% 00:27:53.854 cpu : usr=95.16%, sys=4.56%, ctx=13, majf=0, minf=105 00:27:53.854 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.854 issued rwts: total=1405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:53.854 00:27:53.854 Run status group 0 (all jobs): 00:27:53.854 READ: bw=111MiB/s (117MB/s), 34.8MiB/s-38.6MiB/s (36.5MB/s-40.5MB/s), io=561MiB (588MB), run=5005-5045msec 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.854 11:19:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.854 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 bdev_null0 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 [2024-05-15 11:19:50.162419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:27:53.855 bdev_null1 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 bdev_null2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:53.855 11:19:50 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.855 { 00:27:53.855 "params": { 00:27:53.855 "name": "Nvme$subsystem", 00:27:53.855 "trtype": "$TEST_TRANSPORT", 00:27:53.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.855 "adrfam": "ipv4", 00:27:53.855 "trsvcid": "$NVMF_PORT", 00:27:53.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.855 "hdgst": ${hdgst:-false}, 00:27:53.855 "ddgst": ${ddgst:-false} 00:27:53.855 }, 00:27:53.855 "method": "bdev_nvme_attach_controller" 00:27:53.855 } 00:27:53.855 EOF 00:27:53.855 )") 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.855 { 00:27:53.855 "params": { 00:27:53.855 "name": "Nvme$subsystem", 00:27:53.855 "trtype": "$TEST_TRANSPORT", 00:27:53.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.855 "adrfam": "ipv4", 00:27:53.855 "trsvcid": "$NVMF_PORT", 00:27:53.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.855 "hdgst": ${hdgst:-false}, 00:27:53.855 "ddgst": ${ddgst:-false} 00:27:53.855 }, 00:27:53.855 "method": "bdev_nvme_attach_controller" 00:27:53.855 } 
00:27:53.855 EOF 00:27:53.855 )") 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.855 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.855 { 00:27:53.855 "params": { 00:27:53.855 "name": "Nvme$subsystem", 00:27:53.855 "trtype": "$TEST_TRANSPORT", 00:27:53.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.855 "adrfam": "ipv4", 00:27:53.855 "trsvcid": "$NVMF_PORT", 00:27:53.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.856 "hdgst": ${hdgst:-false}, 00:27:53.856 "ddgst": ${ddgst:-false} 00:27:53.856 }, 00:27:53.856 "method": "bdev_nvme_attach_controller" 00:27:53.856 } 00:27:53.856 EOF 00:27:53.856 )") 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.856 "params": { 00:27:53.856 "name": "Nvme0", 00:27:53.856 "trtype": "tcp", 00:27:53.856 "traddr": "10.0.0.2", 00:27:53.856 "adrfam": "ipv4", 00:27:53.856 "trsvcid": "4420", 00:27:53.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.856 "hdgst": false, 00:27:53.856 "ddgst": false 00:27:53.856 }, 00:27:53.856 "method": "bdev_nvme_attach_controller" 00:27:53.856 },{ 00:27:53.856 "params": { 00:27:53.856 "name": "Nvme1", 00:27:53.856 "trtype": "tcp", 00:27:53.856 "traddr": "10.0.0.2", 00:27:53.856 "adrfam": "ipv4", 00:27:53.856 "trsvcid": "4420", 00:27:53.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.856 "hdgst": false, 00:27:53.856 "ddgst": false 00:27:53.856 }, 00:27:53.856 "method": "bdev_nvme_attach_controller" 00:27:53.856 },{ 00:27:53.856 "params": { 00:27:53.856 "name": "Nvme2", 00:27:53.856 "trtype": "tcp", 00:27:53.856 "traddr": "10.0.0.2", 00:27:53.856 "adrfam": "ipv4", 00:27:53.856 "trsvcid": "4420", 00:27:53.856 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.856 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.856 "hdgst": false, 00:27:53.856 "ddgst": false 00:27:53.856 }, 00:27:53.856 "method": "bdev_nvme_attach_controller" 00:27:53.856 }' 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.856 11:19:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:53.856 11:19:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.856 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.856 ... 00:27:53.856 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.856 ... 00:27:53.856 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.856 ... 
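The `autotest_common.sh@1341-1349` lines above loop over sanitizer runtimes (`libasan`, then `libclang_rt.asan`), scanning the fio plugin's dynamic dependencies with `ldd | grep | awk '{print $3}'` and prepending any hit to `LD_PRELOAD` before launching fio. Here both probes come back empty, so `LD_PRELOAD` ends up containing only the plugin path. A hedged sketch of that probe follows; `/bin/ls` stands in for the SPDK fio plugin, which is not available outside the CI host, and `probe_lib` is a hypothetical helper name.

```shell
#!/usr/bin/env bash
# Sketch of the sanitizer-runtime probe traced above: for each sanitizer
# library pattern, extract the resolved path (third ldd column) so it can
# be LD_PRELOADed ahead of the fio plugin.
probe_lib() {
  local binary=$1 pattern=$2
  # ldd output lines look like: "libfoo.so => /path/libfoo.so (0x...)";
  # $3 is the resolved path.
  ldd "$binary" 2>/dev/null | grep "$pattern" | awk '{print $3}'
}

LD_PRELOAD_LIST=
for pattern in libasan libclang_rt.asan; do
  lib="$(probe_lib /bin/ls "$pattern")"
  [ -n "$lib" ] && LD_PRELOAD_LIST="$LD_PRELOAD_LIST $lib"
done
# With no sanitizer build, this prints an empty list, matching the
# "asan_lib=" / "[[ -n '' ]]" lines in the trace.
echo "LD_PRELOAD would be:$LD_PRELOAD_LIST"
```

The real script then runs `LD_PRELOAD="$asan_libs $plugin" fio --ioengine=spdk_bdev ...`, which is why an empty probe still leaves the plugin path in `LD_PRELOAD`.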
00:27:53.856 fio-3.35 00:27:53.856 Starting 24 threads 00:27:53.856 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.042 00:28:06.042 filename0: (groupid=0, jobs=1): err= 0: pid=2429187: Wed May 15 11:20:01 2024 00:28:06.042 read: IOPS=619, BW=2476KiB/s (2536kB/s)(24.2MiB/10002msec) 00:28:06.042 slat (nsec): min=7130, max=76676, avg=26889.59, stdev=16332.64 00:28:06.042 clat (usec): min=20789, max=40518, avg=25561.46, stdev=1240.94 00:28:06.042 lat (usec): min=20817, max=40582, avg=25588.35, stdev=1241.88 00:28:06.042 clat percentiles (usec): 00:28:06.042 | 1.00th=[23200], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.042 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:06.042 | 70.00th=[25297], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.042 | 99.00th=[27919], 99.50th=[28181], 99.90th=[40109], 99.95th=[40633], 00:28:06.042 | 99.99th=[40633] 00:28:06.042 bw ( KiB/s): min= 2304, max= 2565, per=4.15%, avg=2471.95, stdev=73.67, samples=20 00:28:06.042 iops : min= 576, max= 641, avg=617.95, stdev=18.37, samples=20 00:28:06.042 lat (msec) : 50=100.00% 00:28:06.042 cpu : usr=98.42%, sys=0.81%, ctx=63, majf=0, minf=37 00:28:06.042 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.042 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.042 filename0: (groupid=0, jobs=1): err= 0: pid=2429188: Wed May 15 11:20:01 2024 00:28:06.042 read: IOPS=626, BW=2506KiB/s (2566kB/s)(24.5MiB/10010msec) 00:28:06.042 slat (nsec): min=6402, max=75955, avg=24216.44, stdev=13820.94 00:28:06.042 clat (usec): min=3740, max=28943, avg=25343.81, stdev=2580.90 00:28:06.042 lat (usec): min=3758, max=28950, avg=25368.03, stdev=2580.95 00:28:06.042 
clat percentiles (usec): 00:28:06.042 | 1.00th=[ 5800], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.042 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:06.042 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.042 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28705], 99.95th=[28967], 00:28:06.042 | 99.99th=[28967] 00:28:06.042 bw ( KiB/s): min= 2304, max= 3072, per=4.20%, avg=2502.40, stdev=152.44, samples=20 00:28:06.042 iops : min= 576, max= 768, avg=625.60, stdev=38.11, samples=20 00:28:06.042 lat (msec) : 4=0.45%, 10=0.83%, 20=0.26%, 50=98.47% 00:28:06.042 cpu : usr=98.19%, sys=1.12%, ctx=144, majf=0, minf=54 00:28:06.042 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.042 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename0: (groupid=0, jobs=1): err= 0: pid=2429189: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.043 slat (nsec): min=6898, max=85508, avg=33762.17, stdev=13486.75 00:28:06.043 clat (usec): min=14913, max=36315, avg=25520.65, stdev=1254.51 00:28:06.043 lat (usec): min=14936, max=36333, avg=25554.42, stdev=1254.69 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.043 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[36439], 99.95th=[36439], 00:28:06.043 | 99.99th=[36439] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.043 iops : 
min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.043 lat (msec) : 20=0.42%, 50=99.58% 00:28:06.043 cpu : usr=98.82%, sys=0.74%, ctx=40, majf=0, minf=28 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename0: (groupid=0, jobs=1): err= 0: pid=2429190: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.043 slat (nsec): min=7885, max=92104, avg=34125.05, stdev=14541.51 00:28:06.043 clat (usec): min=14890, max=36346, avg=25495.69, stdev=1251.77 00:28:06.043 lat (usec): min=14904, max=36368, avg=25529.82, stdev=1252.69 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.043 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[36439], 99.95th=[36439], 00:28:06.043 | 99.99th=[36439] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.043 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.043 lat (msec) : 20=0.42%, 50=99.58% 00:28:06.043 cpu : usr=98.26%, sys=1.07%, ctx=49, majf=0, minf=31 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename0: (groupid=0, jobs=1): err= 0: pid=2429191: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=618, BW=2476KiB/s (2535kB/s)(24.2MiB/10004msec) 00:28:06.043 slat (usec): min=7, max=105, avg=38.43, stdev=15.66 00:28:06.043 clat (usec): min=11641, max=49777, avg=25521.66, stdev=1715.57 00:28:06.043 lat (usec): min=11686, max=49797, avg=25560.08, stdev=1715.38 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.043 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28705], 99.90th=[49546], 99.95th=[49546], 00:28:06.043 | 99.99th=[49546] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2465.89, stdev=83.21, samples=19 00:28:06.043 iops : min= 576, max= 640, avg=616.47, stdev=20.80, samples=19 00:28:06.043 lat (msec) : 20=0.26%, 50=99.74% 00:28:06.043 cpu : usr=98.86%, sys=0.60%, ctx=33, majf=0, minf=36 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename0: (groupid=0, jobs=1): err= 0: pid=2429192: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.043 slat (usec): min=6, max=100, avg=30.88, stdev=18.84 00:28:06.043 clat (usec): min=13200, max=36614, avg=25544.84, stdev=1342.33 00:28:06.043 lat (usec): min=13211, max=36634, avg=25575.71, stdev=1343.01 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[23200], 5.00th=[24773], 10.00th=[24773], 
20.00th=[25035], 00:28:06.043 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[35914], 99.95th=[36439], 00:28:06.043 | 99.99th=[36439] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.043 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.043 lat (msec) : 20=0.48%, 50=99.52% 00:28:06.043 cpu : usr=98.61%, sys=0.89%, ctx=55, majf=0, minf=39 00:28:06.043 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename0: (groupid=0, jobs=1): err= 0: pid=2429193: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2476KiB/s (2535kB/s)(24.2MiB/10003msec) 00:28:06.043 slat (usec): min=5, max=106, avg=41.44, stdev=19.81 00:28:06.043 clat (usec): min=11546, max=48689, avg=25458.36, stdev=1672.87 00:28:06.043 lat (usec): min=11560, max=48703, avg=25499.81, stdev=1673.45 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.043 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[26084], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[27919], 99.50th=[28443], 99.90th=[48497], 99.95th=[48497], 00:28:06.043 | 99.99th=[48497] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2465.68, stdev=83.63, samples=19 00:28:06.043 iops : min= 576, max= 640, avg=616.42, stdev=20.91, samples=19 00:28:06.043 lat (msec) : 20=0.26%, 50=99.74% 00:28:06.043 cpu : 
usr=98.70%, sys=0.67%, ctx=54, majf=0, minf=26 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename0: (groupid=0, jobs=1): err= 0: pid=2429194: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=626, BW=2507KiB/s (2567kB/s)(24.5MiB/10009msec) 00:28:06.043 slat (usec): min=7, max=107, avg=49.41, stdev=22.00 00:28:06.043 clat (usec): min=3652, max=28757, avg=25105.97, stdev=2578.49 00:28:06.043 lat (usec): min=3663, max=28777, avg=25155.38, stdev=2581.75 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[ 5342], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:28:06.043 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28705], 99.95th=[28705], 00:28:06.043 | 99.99th=[28705] 00:28:06.043 bw ( KiB/s): min= 2304, max= 3078, per=4.20%, avg=2502.70, stdev=153.30, samples=20 00:28:06.043 iops : min= 576, max= 769, avg=625.65, stdev=38.23, samples=20 00:28:06.043 lat (msec) : 4=0.53%, 10=0.75%, 20=0.26%, 50=98.47% 00:28:06.043 cpu : usr=99.05%, sys=0.49%, ctx=44, majf=0, minf=39 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: 
pid=2429195: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.043 slat (usec): min=7, max=100, avg=40.69, stdev=19.91 00:28:06.043 clat (usec): min=14980, max=36134, avg=25429.04, stdev=1250.13 00:28:06.043 lat (usec): min=15003, max=36171, avg=25469.74, stdev=1251.64 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.043 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[35914], 99.95th=[35914], 00:28:06.043 | 99.99th=[35914] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.043 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.043 lat (msec) : 20=0.43%, 50=99.57% 00:28:06.043 cpu : usr=99.10%, sys=0.51%, ctx=17, majf=0, minf=50 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429196: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=625, BW=2504KiB/s (2564kB/s)(24.5MiB/10020msec) 00:28:06.043 slat (nsec): min=7026, max=76936, avg=25168.48, stdev=15130.36 00:28:06.043 clat (usec): min=3850, max=33300, avg=25346.26, stdev=2465.29 00:28:06.043 lat (usec): min=3862, max=33313, avg=25371.43, stdev=2466.09 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[ 9110], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.043 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 
60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28181], 99.90th=[33162], 99.95th=[33162], 00:28:06.043 | 99.99th=[33424] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2944, per=4.20%, avg=2502.40, stdev=127.83, samples=20 00:28:06.043 iops : min= 576, max= 736, avg=625.60, stdev=31.96, samples=20 00:28:06.043 lat (msec) : 4=0.22%, 10=1.02%, 20=0.10%, 50=98.66% 00:28:06.043 cpu : usr=98.71%, sys=0.68%, ctx=33, majf=0, minf=44 00:28:06.043 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429197: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.2MiB/10003msec) 00:28:06.043 slat (usec): min=5, max=112, avg=44.09, stdev=24.04 00:28:06.043 clat (usec): min=5406, max=43293, avg=25330.44, stdev=1801.15 00:28:06.043 lat (usec): min=5420, max=43310, avg=25374.53, stdev=1803.05 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773], 00:28:06.043 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[27919], 99.50th=[28443], 99.90th=[43254], 99.95th=[43254], 00:28:06.043 | 99.99th=[43254] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=95.91, samples=19 00:28:06.043 iops : min= 576, max= 640, avg=618.11, stdev=23.98, samples=19 00:28:06.043 lat (msec) : 10=0.26%, 20=0.26%, 50=99.48% 00:28:06.043 cpu : usr=99.08%, sys=0.51%, ctx=23, majf=0, minf=38 
00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429198: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2478KiB/s (2537kB/s)(24.2MiB/10021msec) 00:28:06.043 slat (usec): min=8, max=109, avg=40.03, stdev=20.69 00:28:06.043 clat (usec): min=14912, max=36246, avg=25407.20, stdev=1247.38 00:28:06.043 lat (usec): min=14934, max=36268, avg=25447.22, stdev=1249.65 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.043 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[35914], 99.95th=[36439], 00:28:06.043 | 99.99th=[36439] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.043 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.043 lat (msec) : 20=0.40%, 50=99.60% 00:28:06.043 cpu : usr=98.98%, sys=0.59%, ctx=46, majf=0, minf=32 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429199: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=626, 
BW=2506KiB/s (2566kB/s)(24.5MiB/10012msec) 00:28:06.043 slat (usec): min=7, max=104, avg=39.27, stdev=23.49 00:28:06.043 clat (usec): min=3735, max=33547, avg=25265.95, stdev=2662.08 00:28:06.043 lat (usec): min=3757, max=33577, avg=25305.21, stdev=2663.76 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[ 5342], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:28:06.043 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27919], 00:28:06.043 | 99.00th=[28443], 99.50th=[29230], 99.90th=[31327], 99.95th=[32113], 00:28:06.043 | 99.99th=[33424] 00:28:06.043 bw ( KiB/s): min= 2320, max= 3078, per=4.20%, avg=2502.70, stdev=148.81, samples=20 00:28:06.043 iops : min= 580, max= 769, avg=625.65, stdev=37.10, samples=20 00:28:06.043 lat (msec) : 4=0.51%, 10=0.77%, 20=0.26%, 50=98.47% 00:28:06.043 cpu : usr=97.18%, sys=1.64%, ctx=291, majf=0, minf=53 00:28:06.043 IO depths : 1=1.4%, 2=4.7%, 4=22.0%, 8=60.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429200: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.043 slat (usec): min=6, max=101, avg=38.48, stdev=21.39 00:28:06.043 clat (usec): min=14896, max=36454, avg=25404.76, stdev=1256.42 00:28:06.043 lat (usec): min=14911, max=36477, avg=25443.24, stdev=1259.01 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.043 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[25822], 
90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[36439], 99.95th=[36439], 00:28:06.043 | 99.99th=[36439] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.043 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.043 lat (msec) : 20=0.45%, 50=99.55% 00:28:06.043 cpu : usr=97.61%, sys=1.20%, ctx=165, majf=0, minf=31 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429201: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.2MiB/10003msec) 00:28:06.043 slat (usec): min=6, max=109, avg=42.61, stdev=23.24 00:28:06.043 clat (usec): min=5445, max=43191, avg=25352.54, stdev=1797.48 00:28:06.043 lat (usec): min=5460, max=43208, avg=25395.15, stdev=1799.09 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.043 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[27919], 99.50th=[28443], 99.90th=[43254], 99.95th=[43254], 00:28:06.043 | 99.99th=[43254] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=95.91, samples=19 00:28:06.043 iops : min= 576, max= 640, avg=618.11, stdev=23.98, samples=19 00:28:06.043 lat (msec) : 10=0.26%, 20=0.26%, 50=99.48% 00:28:06.043 cpu : usr=98.83%, sys=0.63%, ctx=84, majf=0, minf=25 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename1: (groupid=0, jobs=1): err= 0: pid=2429202: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=618, BW=2476KiB/s (2535kB/s)(24.2MiB/10005msec) 00:28:06.043 slat (usec): min=7, max=109, avg=45.77, stdev=22.99 00:28:06.043 clat (usec): min=11485, max=55987, avg=25411.94, stdev=1799.87 00:28:06.043 lat (usec): min=11505, max=56008, avg=25457.71, stdev=1800.38 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773], 00:28:06.043 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.043 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.043 | 99.00th=[28181], 99.50th=[28443], 99.90th=[51119], 99.95th=[51119], 00:28:06.043 | 99.99th=[55837] 00:28:06.043 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2465.68, stdev=83.63, samples=19 00:28:06.043 iops : min= 576, max= 640, avg=616.42, stdev=20.91, samples=19 00:28:06.043 lat (msec) : 20=0.29%, 50=99.45%, 100=0.26% 00:28:06.043 cpu : usr=99.11%, sys=0.50%, ctx=13, majf=0, minf=35 00:28:06.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.043 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.043 filename2: (groupid=0, jobs=1): err= 0: pid=2429203: Wed May 15 11:20:01 2024 00:28:06.043 read: IOPS=625, BW=2502KiB/s (2562kB/s)(24.4MiB/10003msec) 00:28:06.043 slat (nsec): min=6284, 
max=65327, avg=13585.66, stdev=8360.73 00:28:06.043 clat (usec): min=1778, max=69026, avg=25528.85, stdev=3055.57 00:28:06.043 lat (usec): min=1787, max=69046, avg=25542.44, stdev=3055.00 00:28:06.043 clat percentiles (usec): 00:28:06.043 | 1.00th=[16450], 5.00th=[21627], 10.00th=[25035], 20.00th=[25297], 00:28:06.043 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:06.043 | 70.00th=[25560], 80.00th=[26346], 90.00th=[27919], 95.00th=[27919], 00:28:06.043 | 99.00th=[32375], 99.50th=[35390], 99.90th=[57934], 99.95th=[57934], 00:28:06.043 | 99.99th=[68682] 00:28:06.043 bw ( KiB/s): min= 2352, max= 2608, per=4.17%, avg=2485.05, stdev=77.11, samples=19 00:28:06.043 iops : min= 588, max= 652, avg=621.26, stdev=19.28, samples=19 00:28:06.043 lat (msec) : 2=0.10%, 4=0.06%, 10=0.26%, 20=2.33%, 50=96.99% 00:28:06.043 lat (msec) : 100=0.26% 00:28:06.044 cpu : usr=98.87%, sys=0.73%, ctx=35, majf=0, minf=41 00:28:06.044 IO depths : 1=0.1%, 2=0.3%, 4=1.5%, 8=80.3%, 16=18.0%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=89.5%, 8=9.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429204: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=619, BW=2478KiB/s (2537kB/s)(24.2MiB/10005msec) 00:28:06.044 slat (nsec): min=5731, max=79394, avg=37350.66, stdev=13594.98 00:28:06.044 clat (usec): min=4686, max=46495, avg=25496.64, stdev=1720.28 00:28:06.044 lat (usec): min=4695, max=46511, avg=25533.99, stdev=1720.53 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[23200], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.044 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:06.044 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 
95.00th=[27657], 00:28:06.044 | 99.00th=[27919], 99.50th=[28443], 99.90th=[46400], 99.95th=[46400], 00:28:06.044 | 99.99th=[46400] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2465.68, stdev=71.93, samples=19 00:28:06.044 iops : min= 576, max= 640, avg=616.42, stdev=17.98, samples=19 00:28:06.044 lat (msec) : 10=0.10%, 20=0.26%, 50=99.65% 00:28:06.044 cpu : usr=98.78%, sys=0.82%, ctx=33, majf=0, minf=32 00:28:06.044 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429205: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.044 slat (nsec): min=6576, max=81991, avg=29741.16, stdev=13668.15 00:28:06.044 clat (usec): min=11114, max=40368, avg=25571.42, stdev=1306.43 00:28:06.044 lat (usec): min=11123, max=40396, avg=25601.17, stdev=1306.24 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[22938], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.044 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:06.044 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.044 | 99.00th=[28181], 99.50th=[28443], 99.90th=[35914], 99.95th=[36963], 00:28:06.044 | 99.99th=[40109] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.044 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.044 lat (msec) : 20=0.42%, 50=99.58% 00:28:06.044 cpu : usr=98.90%, sys=0.70%, ctx=27, majf=0, minf=26 00:28:06.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429206: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:28:06.044 slat (usec): min=8, max=100, avg=38.36, stdev=21.22 00:28:06.044 clat (usec): min=14892, max=36340, avg=25410.69, stdev=1253.69 00:28:06.044 lat (usec): min=14916, max=36361, avg=25449.05, stdev=1256.08 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.044 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.044 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.044 | 99.00th=[28181], 99.50th=[28443], 99.90th=[36439], 99.95th=[36439], 00:28:06.044 | 99.99th=[36439] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2478.35, stdev=75.56, samples=20 00:28:06.044 iops : min= 576, max= 641, avg=619.55, stdev=18.84, samples=20 00:28:06.044 lat (msec) : 20=0.45%, 50=99.55% 00:28:06.044 cpu : usr=98.25%, sys=0.91%, ctx=213, majf=0, minf=30 00:28:06.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429207: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10003msec) 00:28:06.044 slat (usec): min=6, max=105, avg=39.03, 
stdev=20.22 00:28:06.044 clat (usec): min=5492, max=66157, avg=25448.04, stdev=2441.48 00:28:06.044 lat (usec): min=5507, max=66177, avg=25487.06, stdev=2441.53 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[22152], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:06.044 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.044 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27395], 95.00th=[27657], 00:28:06.044 | 99.00th=[28181], 99.50th=[33817], 99.90th=[57934], 99.95th=[58459], 00:28:06.044 | 99.99th=[66323] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2468.21, stdev=99.47, samples=19 00:28:06.044 iops : min= 576, max= 640, avg=617.05, stdev=24.87, samples=19 00:28:06.044 lat (msec) : 10=0.26%, 20=0.45%, 50=99.03%, 100=0.26% 00:28:06.044 cpu : usr=98.65%, sys=0.81%, ctx=42, majf=0, minf=45 00:28:06.044 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429208: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=617, BW=2470KiB/s (2529kB/s)(24.1MiB/10003msec) 00:28:06.044 slat (usec): min=5, max=107, avg=44.90, stdev=23.53 00:28:06.044 clat (usec): min=21569, max=60535, avg=25467.59, stdev=2035.94 00:28:06.044 lat (usec): min=21579, max=60553, avg=25512.49, stdev=2036.12 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773], 00:28:06.044 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.044 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:28:06.044 | 99.00th=[28181], 99.50th=[28443], 99.90th=[60556], 
99.95th=[60556], 00:28:06.044 | 99.99th=[60556] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2465.68, stdev=83.63, samples=19 00:28:06.044 iops : min= 576, max= 640, avg=616.42, stdev=20.91, samples=19 00:28:06.044 lat (msec) : 50=99.74%, 100=0.26% 00:28:06.044 cpu : usr=99.07%, sys=0.54%, ctx=14, majf=0, minf=36 00:28:06.044 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429209: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=618, BW=2474KiB/s (2533kB/s)(24.2MiB/10011msec) 00:28:06.044 slat (nsec): min=7407, max=93322, avg=36107.82, stdev=13798.52 00:28:06.044 clat (usec): min=11652, max=56822, avg=25569.94, stdev=1982.46 00:28:06.044 lat (usec): min=11666, max=56842, avg=25606.05, stdev=1982.20 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[23200], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.044 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:06.044 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27657], 00:28:06.044 | 99.00th=[28181], 99.50th=[28443], 99.90th=[56886], 99.95th=[56886], 00:28:06.044 | 99.99th=[56886] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2465.68, stdev=93.89, samples=19 00:28:06.044 iops : min= 576, max= 640, avg=616.42, stdev=23.47, samples=19 00:28:06.044 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:28:06.044 cpu : usr=98.85%, sys=0.76%, ctx=28, majf=0, minf=33 00:28:06.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 filename2: (groupid=0, jobs=1): err= 0: pid=2429210: Wed May 15 11:20:01 2024 00:28:06.044 read: IOPS=619, BW=2476KiB/s (2536kB/s)(24.2MiB/10002msec) 00:28:06.044 slat (nsec): min=6541, max=76934, avg=24090.56, stdev=13684.73 00:28:06.044 clat (usec): min=20795, max=40544, avg=25630.09, stdev=1243.97 00:28:06.044 lat (usec): min=20810, max=40582, avg=25654.18, stdev=1243.54 00:28:06.044 clat percentiles (usec): 00:28:06.044 | 1.00th=[23200], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:06.044 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:06.044 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[27919], 00:28:06.044 | 99.00th=[27919], 99.50th=[28181], 99.90th=[40109], 99.95th=[40633], 00:28:06.044 | 99.99th=[40633] 00:28:06.044 bw ( KiB/s): min= 2304, max= 2565, per=4.15%, avg=2471.95, stdev=73.67, samples=20 00:28:06.044 iops : min= 576, max= 641, avg=617.95, stdev=18.37, samples=20 00:28:06.044 lat (msec) : 50=100.00% 00:28:06.044 cpu : usr=98.78%, sys=0.81%, ctx=22, majf=0, minf=53 00:28:06.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:06.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.044 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.044 00:28:06.044 Run status group 0 (all jobs): 00:28:06.044 READ: bw=58.1MiB/s (61.0MB/s), 2470KiB/s-2507KiB/s (2529kB/s-2567kB/s), io=583MiB (611MB), run=10002-10021msec 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:06.044 
11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:06.044 
11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 bdev_null0 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 [2024-05-15 11:20:02.015435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 bdev_null1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.044 { 00:28:06.044 "params": { 00:28:06.044 "name": "Nvme$subsystem", 00:28:06.044 "trtype": "$TEST_TRANSPORT", 00:28:06.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.044 "adrfam": "ipv4", 00:28:06.044 "trsvcid": 
"$NVMF_PORT", 00:28:06.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.044 "hdgst": ${hdgst:-false}, 00:28:06.044 "ddgst": ${ddgst:-false} 00:28:06.044 }, 00:28:06.044 "method": "bdev_nvme_attach_controller" 00:28:06.044 } 00:28:06.044 EOF 00:28:06.044 )") 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:06.044 11:20:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.044 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.044 { 00:28:06.044 "params": { 00:28:06.044 "name": "Nvme$subsystem", 00:28:06.044 "trtype": "$TEST_TRANSPORT", 00:28:06.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.044 "adrfam": "ipv4", 00:28:06.044 "trsvcid": "$NVMF_PORT", 00:28:06.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.045 "hdgst": ${hdgst:-false}, 00:28:06.045 "ddgst": ${ddgst:-false} 00:28:06.045 }, 00:28:06.045 "method": "bdev_nvme_attach_controller" 00:28:06.045 } 00:28:06.045 EOF 00:28:06.045 )") 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:06.045 "params": { 00:28:06.045 "name": "Nvme0", 00:28:06.045 "trtype": "tcp", 00:28:06.045 "traddr": "10.0.0.2", 00:28:06.045 "adrfam": "ipv4", 00:28:06.045 "trsvcid": "4420", 00:28:06.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.045 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.045 "hdgst": false, 00:28:06.045 "ddgst": false 00:28:06.045 }, 00:28:06.045 "method": "bdev_nvme_attach_controller" 00:28:06.045 },{ 00:28:06.045 "params": { 00:28:06.045 "name": "Nvme1", 00:28:06.045 "trtype": "tcp", 00:28:06.045 "traddr": "10.0.0.2", 00:28:06.045 "adrfam": "ipv4", 00:28:06.045 "trsvcid": "4420", 00:28:06.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.045 "hdgst": false, 00:28:06.045 "ddgst": false 00:28:06.045 }, 00:28:06.045 "method": "bdev_nvme_attach_controller" 00:28:06.045 }' 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:06.045 11:20:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:06.045 11:20:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.045 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:06.045 ... 00:28:06.045 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:06.045 ... 00:28:06.045 fio-3.35 00:28:06.045 Starting 4 threads 00:28:06.045 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.342 00:28:11.342 filename0: (groupid=0, jobs=1): err= 0: pid=2431162: Wed May 15 11:20:08 2024 00:28:11.342 read: IOPS=2755, BW=21.5MiB/s (22.6MB/s)(108MiB/5001msec) 00:28:11.342 slat (nsec): min=6294, max=57952, avg=9859.62, stdev=3970.91 00:28:11.342 clat (usec): min=676, max=6236, avg=2875.62, stdev=498.52 00:28:11.342 lat (usec): min=699, max=6258, avg=2885.48, stdev=498.33 00:28:11.342 clat percentiles (usec): 00:28:11.342 | 1.00th=[ 1827], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2507], 00:28:11.342 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2933], 00:28:11.342 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3490], 95.00th=[ 3720], 00:28:11.342 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5538], 00:28:11.342 | 99.99th=[ 6259] 00:28:11.342 bw ( KiB/s): min=20992, max=22784, per=26.08%, avg=22067.56, stdev=705.08, samples=9 00:28:11.342 iops : min= 2624, max= 2848, avg=2758.44, stdev=88.13, samples=9 00:28:11.342 lat (usec) : 750=0.02%, 1000=0.09% 00:28:11.342 lat (msec) : 2=1.63%, 4=95.30%, 10=2.96% 00:28:11.342 cpu : usr=93.70%, sys=4.34%, ctx=211, majf=0, minf=0 00:28:11.342 IO depths : 1=0.3%, 2=3.3%, 4=66.8%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.342 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 issued rwts: total=13779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.342 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.342 filename0: (groupid=0, jobs=1): err= 0: pid=2431163: Wed May 15 11:20:08 2024 00:28:11.342 read: IOPS=2917, BW=22.8MiB/s (23.9MB/s)(114MiB/5009msec) 00:28:11.342 slat (nsec): min=6274, max=50547, avg=9480.84, stdev=3335.92 00:28:11.342 clat (usec): min=806, max=12308, avg=2710.10, stdev=481.22 00:28:11.342 lat (usec): min=813, max=12320, avg=2719.58, stdev=481.43 00:28:11.342 clat percentiles (usec): 00:28:11.342 | 1.00th=[ 1860], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2376], 00:28:11.342 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2769], 00:28:11.342 | 70.00th=[ 2900], 80.00th=[ 2999], 90.00th=[ 3228], 95.00th=[ 3458], 00:28:11.342 | 99.00th=[ 4047], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[12125], 00:28:11.342 | 99.99th=[12256] 00:28:11.342 bw ( KiB/s): min=21402, max=24880, per=27.63%, avg=23377.00, stdev=1110.50, samples=10 00:28:11.342 iops : min= 2675, max= 3110, avg=2922.10, stdev=138.86, samples=10 00:28:11.342 lat (usec) : 1000=0.03% 00:28:11.342 lat (msec) : 2=2.75%, 4=96.02%, 10=1.14%, 20=0.05% 00:28:11.342 cpu : usr=97.12%, sys=2.50%, ctx=8, majf=0, minf=9 00:28:11.342 IO depths : 1=0.1%, 2=13.3%, 4=58.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 issued rwts: total=14616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.342 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.342 filename1: (groupid=0, jobs=1): err= 0: pid=2431164: Wed May 15 11:20:08 2024 00:28:11.342 read: IOPS=2482, BW=19.4MiB/s (20.3MB/s)(97.0MiB/5001msec) 00:28:11.342 slat (nsec): 
min=6305, max=48242, avg=9833.88, stdev=3706.46 00:28:11.342 clat (usec): min=722, max=6435, avg=3193.65, stdev=565.43 00:28:11.342 lat (usec): min=736, max=6442, avg=3203.48, stdev=564.81 00:28:11.342 clat percentiles (usec): 00:28:11.342 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2802], 00:28:11.342 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3163], 00:28:11.342 | 70.00th=[ 3326], 80.00th=[ 3490], 90.00th=[ 3949], 95.00th=[ 4490], 00:28:11.342 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5669], 00:28:11.342 | 99.99th=[ 6456] 00:28:11.342 bw ( KiB/s): min=19008, max=20768, per=23.48%, avg=19870.22, stdev=588.55, samples=9 00:28:11.342 iops : min= 2376, max= 2596, avg=2483.78, stdev=73.57, samples=9 00:28:11.342 lat (usec) : 750=0.01% 00:28:11.342 lat (msec) : 2=0.41%, 4=90.36%, 10=9.22% 00:28:11.342 cpu : usr=97.20%, sys=2.42%, ctx=8, majf=0, minf=10 00:28:11.342 IO depths : 1=0.2%, 2=1.8%, 4=70.9%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 issued rwts: total=12414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.342 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.342 filename1: (groupid=0, jobs=1): err= 0: pid=2431165: Wed May 15 11:20:08 2024 00:28:11.342 read: IOPS=2431, BW=19.0MiB/s (19.9MB/s)(95.2MiB/5010msec) 00:28:11.342 slat (nsec): min=6273, max=49446, avg=9303.49, stdev=3435.92 00:28:11.342 clat (usec): min=707, max=12288, avg=3262.22, stdev=614.43 00:28:11.342 lat (usec): min=722, max=12301, avg=3271.53, stdev=613.90 00:28:11.342 clat percentiles (usec): 00:28:11.342 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 2737], 20.00th=[ 2868], 00:28:11.342 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3228], 00:28:11.342 | 70.00th=[ 3359], 80.00th=[ 3589], 90.00th=[ 4047], 95.00th=[ 4555], 00:28:11.342 
| 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 6063], 99.95th=[11731], 00:28:11.342 | 99.99th=[12256] 00:28:11.342 bw ( KiB/s): min=18512, max=20304, per=23.02%, avg=19480.00, stdev=527.88, samples=10 00:28:11.342 iops : min= 2314, max= 2538, avg=2435.00, stdev=65.98, samples=10 00:28:11.342 lat (usec) : 750=0.01%, 1000=0.04% 00:28:11.342 lat (msec) : 2=0.13%, 4=89.06%, 10=10.70%, 20=0.07% 00:28:11.342 cpu : usr=97.36%, sys=2.28%, ctx=10, majf=0, minf=9 00:28:11.342 IO depths : 1=0.1%, 2=1.5%, 4=71.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.342 issued rwts: total=12183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.342 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.342 00:28:11.342 Run status group 0 (all jobs): 00:28:11.342 READ: bw=82.6MiB/s (86.6MB/s), 19.0MiB/s-22.8MiB/s (19.9MB/s-23.9MB/s), io=414MiB (434MB), run=5001-5010msec 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.342 00:28:11.342 real 0m24.347s 00:28:11.342 user 4m51.230s 00:28:11.342 sys 0m4.246s 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:11.342 11:20:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.342 ************************************ 00:28:11.342 END TEST fio_dif_rand_params 00:28:11.342 ************************************ 00:28:11.342 11:20:08 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:28:11.342 11:20:08 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:11.342 11:20:08 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:11.342 11:20:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:11.342 ************************************ 00:28:11.342 START TEST fio_dif_digest 00:28:11.342 ************************************ 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:11.342 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:11.343 11:20:08 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:11.343 bdev_null0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:11.343 [2024-05-15 11:20:08.532281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.343 { 00:28:11.343 "params": { 00:28:11.343 "name": "Nvme$subsystem", 00:28:11.343 "trtype": "$TEST_TRANSPORT", 00:28:11.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.343 "adrfam": "ipv4", 00:28:11.343 "trsvcid": "$NVMF_PORT", 00:28:11.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.343 "hdgst": ${hdgst:-false}, 00:28:11.343 "ddgst": ${ddgst:-false} 00:28:11.343 }, 00:28:11.343 "method": "bdev_nvme_attach_controller" 00:28:11.343 } 00:28:11.343 EOF 00:28:11.343 )") 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:11.343 "params": { 00:28:11.343 "name": "Nvme0", 00:28:11.343 "trtype": "tcp", 00:28:11.343 "traddr": "10.0.0.2", 00:28:11.343 "adrfam": "ipv4", 00:28:11.343 "trsvcid": "4420", 00:28:11.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.343 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:11.343 "hdgst": true, 00:28:11.343 "ddgst": true 00:28:11.343 }, 00:28:11.343 "method": "bdev_nvme_attach_controller" 00:28:11.343 }' 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:11.343 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:11.606 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:11.606 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:11.606 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:11.606 11:20:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.864 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:11.864 ... 
00:28:11.864 fio-3.35 00:28:11.864 Starting 3 threads 00:28:11.864 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.042 00:28:24.042 filename0: (groupid=0, jobs=1): err= 0: pid=2432438: Wed May 15 11:20:19 2024 00:28:24.042 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10046msec) 00:28:24.042 slat (nsec): min=6698, max=44755, avg=23128.15, stdev=6432.65 00:28:24.042 clat (usec): min=7731, max=52622, avg=10117.71, stdev=1275.15 00:28:24.042 lat (usec): min=7747, max=52633, avg=10140.84, stdev=1275.14 00:28:24.042 clat percentiles (usec): 00:28:24.042 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:28:24.042 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:28:24.042 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:28:24.042 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12649], 99.95th=[49021], 00:28:24.042 | 99.99th=[52691] 00:28:24.042 bw ( KiB/s): min=36424, max=39600, per=35.31%, avg=37966.60, stdev=608.36, samples=20 00:28:24.042 iops : min= 284, max= 309, avg=296.45, stdev= 4.80, samples=20 00:28:24.042 lat (msec) : 10=44.52%, 20=55.41%, 50=0.03%, 100=0.03% 00:28:24.042 cpu : usr=95.60%, sys=4.07%, ctx=25, majf=0, minf=123 00:28:24.042 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.042 issued rwts: total=2967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.042 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:24.042 filename0: (groupid=0, jobs=1): err= 0: pid=2432439: Wed May 15 11:20:19 2024 00:28:24.043 read: IOPS=268, BW=33.6MiB/s (35.3MB/s)(338MiB/10045msec) 00:28:24.043 slat (usec): min=6, max=108, avg=19.29, stdev= 8.32 00:28:24.043 clat (usec): min=7368, max=47513, avg=11115.39, stdev=1230.37 00:28:24.043 lat (usec): min=7385, max=47524, avg=11134.68, 
stdev=1230.68 00:28:24.043 clat percentiles (usec): 00:28:24.043 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:28:24.043 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:28:24.043 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:28:24.043 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14615], 99.95th=[45351], 00:28:24.043 | 99.99th=[47449] 00:28:24.043 bw ( KiB/s): min=33536, max=36096, per=32.14%, avg=34560.00, stdev=674.76, samples=20 00:28:24.043 iops : min= 262, max= 282, avg=270.00, stdev= 5.27, samples=20 00:28:24.043 lat (msec) : 10=6.44%, 20=93.49%, 50=0.07% 00:28:24.043 cpu : usr=96.83%, sys=2.83%, ctx=23, majf=0, minf=138 00:28:24.043 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.043 issued rwts: total=2702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.043 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:24.043 filename0: (groupid=0, jobs=1): err= 0: pid=2432440: Wed May 15 11:20:19 2024 00:28:24.043 read: IOPS=275, BW=34.5MiB/s (36.1MB/s)(346MiB/10045msec) 00:28:24.043 slat (nsec): min=6604, max=73767, avg=19409.89, stdev=8364.89 00:28:24.043 clat (usec): min=7018, max=47897, avg=10842.02, stdev=1210.26 00:28:24.043 lat (usec): min=7032, max=47909, avg=10861.43, stdev=1210.45 00:28:24.043 clat percentiles (usec): 00:28:24.043 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:28:24.043 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:28:24.043 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:28:24.043 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13566], 99.95th=[45351], 00:28:24.043 | 99.99th=[47973] 00:28:24.043 bw ( KiB/s): min=34560, max=37120, per=32.95%, avg=35430.40, stdev=645.50, samples=20 
00:28:24.043 iops : min= 270, max= 290, avg=276.80, stdev= 5.04, samples=20 00:28:24.043 lat (msec) : 10=12.64%, 20=87.29%, 50=0.07% 00:28:24.043 cpu : usr=97.45%, sys=2.23%, ctx=18, majf=0, minf=169 00:28:24.043 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.043 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.043 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:24.043 00:28:24.043 Run status group 0 (all jobs): 00:28:24.043 READ: bw=105MiB/s (110MB/s), 33.6MiB/s-36.9MiB/s (35.3MB/s-38.7MB/s), io=1055MiB (1106MB), run=10045-10046msec 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.043 00:28:24.043 real 0m11.173s 00:28:24.043 user 0m35.950s 00:28:24.043 sys 0m1.257s 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:24.043 11:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.043 ************************************ 00:28:24.043 END TEST fio_dif_digest 00:28:24.043 ************************************ 00:28:24.043 11:20:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:24.043 11:20:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.043 rmmod nvme_tcp 00:28:24.043 rmmod nvme_fabrics 00:28:24.043 rmmod nvme_keyring 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2423615 ']' 00:28:24.043 11:20:19 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2423615 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 2423615 ']' 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 2423615 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2423615 00:28:24.043 11:20:19 nvmf_dif -- 
common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2423615' 00:28:24.043 killing process with pid 2423615 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@966 -- # kill 2423615 00:28:24.043 [2024-05-15 11:20:19.826805] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:24.043 11:20:19 nvmf_dif -- common/autotest_common.sh@971 -- # wait 2423615 00:28:24.043 11:20:20 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:24.043 11:20:20 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:25.415 Waiting for block devices as requested 00:28:25.415 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:25.415 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:25.673 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:25.673 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:25.673 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:25.931 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:25.931 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:25.931 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:25.931 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:26.188 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:26.188 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:26.188 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:26.188 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:26.445 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:26.445 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:26.445 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:26.703 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:26.703 11:20:23 nvmf_dif -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.703 11:20:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:26.703 11:20:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.703 11:20:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.703 11:20:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.703 11:20:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:26.703 11:20:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.229 11:20:25 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:29.229 00:28:29.229 real 1m13.385s 00:28:29.229 user 7m9.821s 00:28:29.229 sys 0m17.734s 00:28:29.229 11:20:25 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:29.229 11:20:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.229 ************************************ 00:28:29.229 END TEST nvmf_dif 00:28:29.229 ************************************ 00:28:29.229 11:20:25 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:29.229 11:20:25 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:29.229 11:20:25 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:29.229 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:28:29.229 ************************************ 00:28:29.229 START TEST nvmf_abort_qd_sizes 00:28:29.229 ************************************ 00:28:29.229 11:20:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:29.229 * Looking for test storage... 
00:28:29.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:29.229 11:20:26 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.229 11:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:34.482 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:34.482 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.482 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:28:34.483 Found net devices under 0000:86:00.0: cvl_0_0 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:34.483 Found net devices under 0000:86:00.1: cvl_0_1 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:34.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:28:34.483 00:28:34.483 --- 10.0.0.2 ping statistics --- 00:28:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.483 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:34.483 00:28:34.483 --- 10.0.0.1 ping statistics --- 00:28:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.483 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:34.483 11:20:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:36.381 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:36.381 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:28:36.381 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:37.317 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2440131 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2440131 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 2440131 ']' 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:37.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:37.317 11:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:37.574 [2024-05-15 11:20:34.584432] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:28:37.574 [2024-05-15 11:20:34.584469] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.574 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.574 [2024-05-15 11:20:34.640490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.574 [2024-05-15 11:20:34.724611] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.574 [2024-05-15 11:20:34.724646] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.575 [2024-05-15 11:20:34.724653] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.575 [2024-05-15 11:20:34.724659] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.575 [2024-05-15 11:20:34.724664] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:37.575 [2024-05-15 11:20:34.724704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.575 [2024-05-15 11:20:34.724810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.575 [2024-05-15 11:20:34.724919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.575 [2024-05-15 11:20:34.724921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.140 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:38.140 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:28:38.140 11:20:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.140 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:38.140 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:38.398 11:20:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:38.398 ************************************ 00:28:38.398 START TEST spdk_target_abort 00:28:38.398 ************************************ 00:28:38.398 11:20:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:28:38.398 11:20:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:38.398 11:20:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:28:38.398 11:20:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:38.398 11:20:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:41.733 spdk_targetn1 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:41.733 [2024-05-15 11:20:38.309232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:41.733 [2024-05-15 11:20:38.341978] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:41.733 [2024-05-15 11:20:38.342225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4' 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:41.733 11:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.733 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.016 Initializing NVMe Controllers 00:28:45.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:45.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:45.016 Initialization complete. Launching workers. 
00:28:45.016 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16623, failed: 0 00:28:45.017 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1359, failed to submit 15264 00:28:45.017 success 790, unsuccess 569, failed 0 00:28:45.017 11:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:45.017 11:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:45.017 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.309 Initializing NVMe Controllers 00:28:48.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:48.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:48.309 Initialization complete. Launching workers. 
00:28:48.309 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8526, failed: 0 00:28:48.309 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7281 00:28:48.309 success 309, unsuccess 936, failed 0 00:28:48.309 11:20:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:48.309 11:20:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:48.309 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.845 Initializing NVMe Controllers 00:28:50.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:50.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:50.845 Initialization complete. Launching workers. 
00:28:50.845 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37510, failed: 0 00:28:50.845 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2909, failed to submit 34601 00:28:50.845 success 584, unsuccess 2325, failed 0 00:28:50.845 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:50.845 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.845 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.105 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.105 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:51.105 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.105 11:20:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2440131 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 2440131 ']' 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 2440131 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2440131 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2440131' 00:28:52.484 killing process with pid 2440131 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 2440131 00:28:52.484 [2024-05-15 11:20:49.442453] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 2440131 00:28:52.484 00:28:52.484 real 0m14.179s 00:28:52.484 user 0m56.527s 00:28:52.484 sys 0m2.194s 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.484 ************************************ 00:28:52.484 END TEST spdk_target_abort 00:28:52.484 ************************************ 00:28:52.484 11:20:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:52.484 11:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:52.484 11:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:52.484 11:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:52.484 ************************************ 00:28:52.484 START TEST kernel_target_abort 00:28:52.484 ************************************ 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:52.484 11:20:49 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:52.484 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:52.743 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:52.743 11:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:54.644 Waiting for block devices as requested 00:28:54.644 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:54.644 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:54.903 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:54.903 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:54.903 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:54.903 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:55.162 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:55.162 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:55.162 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:55.162 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:55.421 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:55.421 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:55.421 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:55.685 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:55.685 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:55.685 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:55.685 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:55.944 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:55.944 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e 
/sys/block/nvme0n1 ]] 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:55.945 No valid GPT data, bailing 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:55.945 11:20:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:55.945 00:28:55.945 Discovery Log Number of Records 2, Generation counter 2 00:28:55.945 =====Discovery Log Entry 0====== 00:28:55.945 trtype: tcp 00:28:55.945 adrfam: ipv4 00:28:55.945 subtype: current discovery subsystem 00:28:55.945 treq: not specified, sq flow control disable supported 00:28:55.945 portid: 1 00:28:55.945 trsvcid: 4420 00:28:55.945 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:55.945 traddr: 10.0.0.1 00:28:55.945 eflags: none 00:28:55.945 sectype: none 00:28:55.945 =====Discovery Log Entry 1====== 00:28:55.945 trtype: tcp 00:28:55.945 adrfam: ipv4 00:28:55.945 subtype: nvme subsystem 00:28:55.945 treq: not specified, sq flow control disable supported 00:28:55.945 portid: 1 00:28:55.945 trsvcid: 
4420 00:28:55.945 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:55.945 traddr: 10.0.0.1 00:28:55.945 eflags: none 00:28:55.945 sectype: none 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:55.945 11:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.202 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.483 Initializing NVMe Controllers 00:28:59.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:59.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:59.483 Initialization complete. Launching workers. 
00:28:59.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89658, failed: 0 00:28:59.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 89658, failed to submit 0 00:28:59.483 success 0, unsuccess 89658, failed 0 00:28:59.483 11:20:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:59.483 11:20:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.483 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.758 Initializing NVMe Controllers 00:29:02.758 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:02.758 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:02.758 Initialization complete. Launching workers. 
00:29:02.758 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144133, failed: 0 00:29:02.758 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35694, failed to submit 108439 00:29:02.758 success 0, unsuccess 35694, failed 0 00:29:02.758 11:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:02.758 11:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.758 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.283 Initializing NVMe Controllers 00:29:05.283 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:05.283 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:05.283 Initialization complete. Launching workers. 
00:29:05.283 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134400, failed: 0 00:29:05.283 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33642, failed to submit 100758 00:29:05.283 success 0, unsuccess 33642, failed 0 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:05.283 11:21:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:07.854 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:07.854 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:08.120 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:08.120 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:08.120 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:08.120 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:09.052 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:09.052 00:29:09.052 real 0m16.396s 00:29:09.052 user 0m8.354s 00:29:09.052 sys 0m4.382s 00:29:09.052 11:21:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:09.052 11:21:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:09.052 ************************************ 00:29:09.052 END TEST kernel_target_abort 00:29:09.052 ************************************ 00:29:09.052 11:21:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:09.052 11:21:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.053 rmmod nvme_tcp 00:29:09.053 rmmod nvme_fabrics 00:29:09.053 rmmod nvme_keyring 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2440131 ']' 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2440131 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 2440131 ']' 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 2440131 00:29:09.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2440131) - No such process 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 2440131 is not found' 00:29:09.053 Process with pid 2440131 is not found 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:09.053 11:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:11.579 Waiting for block devices as requested 00:29:11.579 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:11.579 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:11.837 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:11.837 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:11.837 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:11.837 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:12.095 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:12.095 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:12.095 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:12.095 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:12.353 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:12.353 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:12.353 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:12.353 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:12.610 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:12.610 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:12.610 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.610 11:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.142 11:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.142 00:29:15.142 real 0m45.973s 00:29:15.142 user 1m8.422s 00:29:15.142 sys 0m14.415s 00:29:15.142 11:21:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:15.142 11:21:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:15.142 ************************************ 00:29:15.142 END TEST nvmf_abort_qd_sizes 00:29:15.142 ************************************ 00:29:15.142 11:21:11 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:15.142 11:21:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:15.142 11:21:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:15.142 11:21:11 -- common/autotest_common.sh@10 -- # set +x 00:29:15.142 ************************************ 00:29:15.142 START TEST keyring_file 00:29:15.142 ************************************ 00:29:15.142 11:21:12 keyring_file -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:15.142 * Looking for test storage... 00:29:15.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:15.142 11:21:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:15.142 11:21:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.142 11:21:12 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.142 11:21:12 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.142 11:21:12 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.142 11:21:12 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.142 11:21:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.142 11:21:12 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.142 11:21:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.143 11:21:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:15.143 11:21:12 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:15.143 11:21:12 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WjA7Yt1A0M 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WjA7Yt1A0M 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WjA7Yt1A0M 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WjA7Yt1A0M 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Py928jQ5kh 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:15.143 11:21:12 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:15.143 11:21:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Py928jQ5kh 00:29:15.143 11:21:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Py928jQ5kh 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Py928jQ5kh 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=2449369 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2449369 00:29:15.143 11:21:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:15.143 11:21:12 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2449369 ']' 00:29:15.143 11:21:12 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.143 11:21:12 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:15.143 11:21:12 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.143 11:21:12 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:15.143 11:21:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:15.143 [2024-05-15 11:21:12.273490] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:29:15.143 [2024-05-15 11:21:12.273540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449369 ] 00:29:15.143 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.143 [2024-05-15 11:21:12.326078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.143 [2024-05-15 11:21:12.404961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.073 11:21:13 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:16.073 11:21:13 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:16.073 11:21:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:16.073 11:21:13 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.073 11:21:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:16.073 [2024-05-15 11:21:13.077124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.073 null0 00:29:16.073 [2024-05-15 11:21:13.109160] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:16.073 [2024-05-15 11:21:13.109206] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:16.073 [2024-05-15 11:21:13.109450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:16.073 [2024-05-15 11:21:13.117201] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.074 11:21:13 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:16.074 11:21:13 
keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:16.074 [2024-05-15 11:21:13.133238] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:16.074 request: 00:29:16.074 { 00:29:16.074 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.074 "secure_channel": false, 00:29:16.074 "listen_address": { 00:29:16.074 "trtype": "tcp", 00:29:16.074 "traddr": "127.0.0.1", 00:29:16.074 "trsvcid": "4420" 00:29:16.074 }, 00:29:16.074 "method": "nvmf_subsystem_add_listener", 00:29:16.074 "req_id": 1 00:29:16.074 } 00:29:16.074 Got JSON-RPC error response 00:29:16.074 response: 00:29:16.074 { 00:29:16.074 "code": -32602, 00:29:16.074 "message": "Invalid parameters" 00:29:16.074 } 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:16.074 11:21:13 keyring_file -- 
common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:16.074 11:21:13 keyring_file -- keyring/file.sh@46 -- # bperfpid=2449505 00:29:16.074 11:21:13 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:16.074 11:21:13 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2449505 /var/tmp/bperf.sock 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2449505 ']' 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:16.074 11:21:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:16.074 [2024-05-15 11:21:13.174161] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:29:16.074 [2024-05-15 11:21:13.174207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449505 ] 00:29:16.074 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.074 [2024-05-15 11:21:13.225645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.074 [2024-05-15 11:21:13.297533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.005 11:21:13 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:17.005 11:21:13 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:17.005 11:21:13 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:17.005 11:21:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:17.005 11:21:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Py928jQ5kh 00:29:17.005 11:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Py928jQ5kh 00:29:17.262 11:21:14 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:17.262 11:21:14 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:17.262 11:21:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.262 11:21:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.262 11:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.262 11:21:14 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.WjA7Yt1A0M == \/\t\m\p\/\t\m\p\.\W\j\A\7\Y\t\1\A\0\M ]] 00:29:17.262 
11:21:14 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:17.262 11:21:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.262 11:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.262 11:21:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:17.262 11:21:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:17.518 11:21:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Py928jQ5kh == \/\t\m\p\/\t\m\p\.\P\y\9\2\8\j\Q\5\k\h ]] 00:29:17.518 11:21:14 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:17.518 11:21:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:17.518 11:21:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.518 11:21:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.518 11:21:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.518 11:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.775 11:21:14 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:17.775 11:21:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:17.775 11:21:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.775 11:21:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:17.775 11:21:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.775 11:21:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:17.775 11:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.775 11:21:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:17.775 11:21:15 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:17.775 11:21:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:18.032 [2024-05-15 11:21:15.179788] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:18.032 nvme0n1 00:29:18.032 11:21:15 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:18.032 11:21:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:18.032 11:21:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.032 11:21:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.032 11:21:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.032 11:21:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:18.289 11:21:15 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:18.289 11:21:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:18.289 11:21:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:18.289 11:21:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.289 11:21:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.289 11:21:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:18.289 11:21:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.545 11:21:15 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:18.545 11:21:15 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.545 Running I/O for 1 seconds... 00:29:19.475 00:29:19.475 Latency(us) 00:29:19.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.475 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:19.475 nvme0n1 : 1.00 17043.07 66.57 0.00 0.00 7493.87 3561.74 14246.96 00:29:19.475 =================================================================================================================== 00:29:19.475 Total : 17043.07 66.57 0.00 0.00 7493.87 3561.74 14246.96 00:29:19.475 0 00:29:19.732 11:21:16 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:19.732 11:21:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:19.732 11:21:16 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:19.732 11:21:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:19.732 11:21:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:19.732 11:21:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.732 11:21:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.732 11:21:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:19.989 11:21:17 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:19.989 11:21:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:19.989 11:21:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:19.989 11:21:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.989 11:21:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:29:19.989 11:21:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.989 11:21:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.246 11:21:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:20.246 11:21:17 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:20.246 11:21:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:20.246 [2024-05-15 11:21:17.434012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: 
Transport endpoint is not connected 00:29:20.246 [2024-05-15 11:21:17.434763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c1e0 (107): Transport endpoint is not connected 00:29:20.246 [2024-05-15 11:21:17.435758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c1e0 (9): Bad file descriptor 00:29:20.246 [2024-05-15 11:21:17.436759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.246 [2024-05-15 11:21:17.436769] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:20.246 [2024-05-15 11:21:17.436775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.246 request: 00:29:20.246 { 00:29:20.246 "name": "nvme0", 00:29:20.246 "trtype": "tcp", 00:29:20.246 "traddr": "127.0.0.1", 00:29:20.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:20.246 "adrfam": "ipv4", 00:29:20.246 "trsvcid": "4420", 00:29:20.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.246 "psk": "key1", 00:29:20.246 "method": "bdev_nvme_attach_controller", 00:29:20.246 "req_id": 1 00:29:20.246 } 00:29:20.246 Got JSON-RPC error response 00:29:20.246 response: 00:29:20.246 { 00:29:20.246 "code": -32602, 00:29:20.246 "message": "Invalid parameters" 00:29:20.246 } 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:20.246 11:21:17 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:20.246 11:21:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:20.246 11:21:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.246 11:21:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:20.246 11:21:17 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.246 11:21:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.246 11:21:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:20.501 11:21:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:20.501 11:21:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:20.501 11:21:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.501 11:21:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:20.501 11:21:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.501 11:21:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.501 11:21:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:20.757 11:21:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:20.757 11:21:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:20.757 11:21:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:20.757 11:21:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:20.757 11:21:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:21.014 11:21:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:21.014 11:21:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:21.014 11:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.272 11:21:18 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:21.272 11:21:18 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 11:21:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 11:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 [2024-05-15 11:21:18.477864] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WjA7Yt1A0M': 0100660 00:29:21.272 [2024-05-15 11:21:18.477886] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:21.272 request: 00:29:21.272 { 00:29:21.272 "name": "key0", 00:29:21.272 "path": "/tmp/tmp.WjA7Yt1A0M", 00:29:21.272 "method": "keyring_file_add_key", 00:29:21.272 "req_id": 1 00:29:21.272 } 00:29:21.272 Got JSON-RPC error response 00:29:21.272 response: 00:29:21.272 { 00:29:21.272 "code": -1, 00:29:21.272 "message": "Operation not permitted" 00:29:21.272 } 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@660 -- # (( es > 
128 )) 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:21.272 11:21:18 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:21.272 11:21:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 11:21:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:21.272 11:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WjA7Yt1A0M 00:29:21.529 11:21:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.WjA7Yt1A0M 00:29:21.529 11:21:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:21.529 11:21:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:21.529 11:21:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:21.529 11:21:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:21.529 11:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.529 11:21:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:21.786 11:21:18 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:21.786 11:21:18 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:21.786 11:21:18 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:21.786 11:21:18 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:21.786 11:21:18 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:29:21.786 
11:21:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.786 11:21:18 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:29:21.786 11:21:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.786 11:21:18 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:21.786 11:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:21.786 [2024-05-15 11:21:19.015290] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WjA7Yt1A0M': No such file or directory 00:29:21.787 [2024-05-15 11:21:19.015309] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:21.787 [2024-05-15 11:21:19.015329] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:21.787 [2024-05-15 11:21:19.015336] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:21.787 [2024-05-15 11:21:19.015342] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:21.787 request: 00:29:21.787 { 00:29:21.787 "name": "nvme0", 00:29:21.787 "trtype": "tcp", 00:29:21.787 "traddr": "127.0.0.1", 00:29:21.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:21.787 "adrfam": "ipv4", 00:29:21.787 "trsvcid": "4420", 00:29:21.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:21.787 "psk": "key0", 00:29:21.787 "method": "bdev_nvme_attach_controller", 00:29:21.787 "req_id": 1 00:29:21.787 } 00:29:21.787 Got JSON-RPC error response 00:29:21.787 response: 
00:29:21.787 { 00:29:21.787 "code": -19, 00:29:21.787 "message": "No such device" 00:29:21.787 } 00:29:21.787 11:21:19 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:21.787 11:21:19 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:21.787 11:21:19 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:21.787 11:21:19 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:21.787 11:21:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:21.787 11:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:22.044 11:21:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UX1MGORdTT 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:22.044 11:21:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:22.044 11:21:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:22.044 11:21:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:22.044 11:21:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:22.044 11:21:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:22.044 11:21:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:22.044 
11:21:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UX1MGORdTT 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UX1MGORdTT 00:29:22.044 11:21:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.UX1MGORdTT 00:29:22.044 11:21:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UX1MGORdTT 00:29:22.044 11:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UX1MGORdTT 00:29:22.301 11:21:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:22.301 11:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:22.558 nvme0n1 00:29:22.558 11:21:19 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:22.558 11:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:22.558 11:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:22.558 11:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:22.558 11:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:22.558 11:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.816 11:21:19 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:22.816 11:21:19 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:22.816 11:21:19 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:22.816 11:21:20 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:22.816 11:21:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:22.816 11:21:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:22.816 11:21:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:22.816 11:21:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.074 11:21:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:23.074 11:21:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:23.074 11:21:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:23.074 11:21:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:23.074 11:21:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:23.074 11:21:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.074 11:21:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:23.331 11:21:20 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:23.331 11:21:20 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:23.331 11:21:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:23.331 11:21:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:23.331 11:21:20 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:23.331 11:21:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:29:23.589 11:21:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:23.589 11:21:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UX1MGORdTT 00:29:23.589 11:21:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UX1MGORdTT 00:29:23.847 11:21:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Py928jQ5kh 00:29:23.847 11:21:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Py928jQ5kh 00:29:24.114 11:21:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:24.114 11:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:24.114 nvme0n1 00:29:24.114 11:21:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:24.114 11:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:24.372 11:21:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:24.372 "subsystems": [ 00:29:24.372 { 00:29:24.372 "subsystem": "keyring", 00:29:24.372 "config": [ 00:29:24.372 { 00:29:24.372 "method": "keyring_file_add_key", 00:29:24.372 "params": { 00:29:24.372 "name": "key0", 00:29:24.372 "path": "/tmp/tmp.UX1MGORdTT" 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "keyring_file_add_key", 00:29:24.372 "params": { 00:29:24.372 "name": "key1", 
00:29:24.372 "path": "/tmp/tmp.Py928jQ5kh" 00:29:24.372 } 00:29:24.372 } 00:29:24.372 ] 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "subsystem": "iobuf", 00:29:24.372 "config": [ 00:29:24.372 { 00:29:24.372 "method": "iobuf_set_options", 00:29:24.372 "params": { 00:29:24.372 "small_pool_count": 8192, 00:29:24.372 "large_pool_count": 1024, 00:29:24.372 "small_bufsize": 8192, 00:29:24.372 "large_bufsize": 135168 00:29:24.372 } 00:29:24.372 } 00:29:24.372 ] 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "subsystem": "sock", 00:29:24.372 "config": [ 00:29:24.372 { 00:29:24.372 "method": "sock_impl_set_options", 00:29:24.372 "params": { 00:29:24.372 "impl_name": "posix", 00:29:24.372 "recv_buf_size": 2097152, 00:29:24.372 "send_buf_size": 2097152, 00:29:24.372 "enable_recv_pipe": true, 00:29:24.372 "enable_quickack": false, 00:29:24.372 "enable_placement_id": 0, 00:29:24.372 "enable_zerocopy_send_server": true, 00:29:24.372 "enable_zerocopy_send_client": false, 00:29:24.372 "zerocopy_threshold": 0, 00:29:24.372 "tls_version": 0, 00:29:24.372 "enable_ktls": false 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "sock_impl_set_options", 00:29:24.372 "params": { 00:29:24.372 "impl_name": "ssl", 00:29:24.372 "recv_buf_size": 4096, 00:29:24.372 "send_buf_size": 4096, 00:29:24.372 "enable_recv_pipe": true, 00:29:24.372 "enable_quickack": false, 00:29:24.372 "enable_placement_id": 0, 00:29:24.372 "enable_zerocopy_send_server": true, 00:29:24.372 "enable_zerocopy_send_client": false, 00:29:24.372 "zerocopy_threshold": 0, 00:29:24.372 "tls_version": 0, 00:29:24.372 "enable_ktls": false 00:29:24.372 } 00:29:24.372 } 00:29:24.372 ] 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "subsystem": "vmd", 00:29:24.372 "config": [] 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "subsystem": "accel", 00:29:24.372 "config": [ 00:29:24.372 { 00:29:24.372 "method": "accel_set_options", 00:29:24.372 "params": { 00:29:24.372 "small_cache_size": 128, 00:29:24.372 "large_cache_size": 
16, 00:29:24.372 "task_count": 2048, 00:29:24.372 "sequence_count": 2048, 00:29:24.372 "buf_count": 2048 00:29:24.372 } 00:29:24.372 } 00:29:24.372 ] 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "subsystem": "bdev", 00:29:24.372 "config": [ 00:29:24.372 { 00:29:24.372 "method": "bdev_set_options", 00:29:24.372 "params": { 00:29:24.372 "bdev_io_pool_size": 65535, 00:29:24.372 "bdev_io_cache_size": 256, 00:29:24.372 "bdev_auto_examine": true, 00:29:24.372 "iobuf_small_cache_size": 128, 00:29:24.372 "iobuf_large_cache_size": 16 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "bdev_raid_set_options", 00:29:24.372 "params": { 00:29:24.372 "process_window_size_kb": 1024 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "bdev_iscsi_set_options", 00:29:24.372 "params": { 00:29:24.372 "timeout_sec": 30 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "bdev_nvme_set_options", 00:29:24.372 "params": { 00:29:24.372 "action_on_timeout": "none", 00:29:24.372 "timeout_us": 0, 00:29:24.372 "timeout_admin_us": 0, 00:29:24.372 "keep_alive_timeout_ms": 10000, 00:29:24.372 "arbitration_burst": 0, 00:29:24.372 "low_priority_weight": 0, 00:29:24.372 "medium_priority_weight": 0, 00:29:24.372 "high_priority_weight": 0, 00:29:24.372 "nvme_adminq_poll_period_us": 10000, 00:29:24.372 "nvme_ioq_poll_period_us": 0, 00:29:24.372 "io_queue_requests": 512, 00:29:24.372 "delay_cmd_submit": true, 00:29:24.372 "transport_retry_count": 4, 00:29:24.372 "bdev_retry_count": 3, 00:29:24.372 "transport_ack_timeout": 0, 00:29:24.372 "ctrlr_loss_timeout_sec": 0, 00:29:24.372 "reconnect_delay_sec": 0, 00:29:24.372 "fast_io_fail_timeout_sec": 0, 00:29:24.372 "disable_auto_failback": false, 00:29:24.372 "generate_uuids": false, 00:29:24.372 "transport_tos": 0, 00:29:24.372 "nvme_error_stat": false, 00:29:24.372 "rdma_srq_size": 0, 00:29:24.372 "io_path_stat": false, 00:29:24.372 "allow_accel_sequence": false, 00:29:24.372 "rdma_max_cq_size": 0, 
00:29:24.372 "rdma_cm_event_timeout_ms": 0, 00:29:24.372 "dhchap_digests": [ 00:29:24.372 "sha256", 00:29:24.372 "sha384", 00:29:24.372 "sha512" 00:29:24.372 ], 00:29:24.372 "dhchap_dhgroups": [ 00:29:24.372 "null", 00:29:24.372 "ffdhe2048", 00:29:24.372 "ffdhe3072", 00:29:24.372 "ffdhe4096", 00:29:24.372 "ffdhe6144", 00:29:24.372 "ffdhe8192" 00:29:24.372 ] 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "bdev_nvme_attach_controller", 00:29:24.372 "params": { 00:29:24.372 "name": "nvme0", 00:29:24.372 "trtype": "TCP", 00:29:24.372 "adrfam": "IPv4", 00:29:24.372 "traddr": "127.0.0.1", 00:29:24.372 "trsvcid": "4420", 00:29:24.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.372 "prchk_reftag": false, 00:29:24.372 "prchk_guard": false, 00:29:24.372 "ctrlr_loss_timeout_sec": 0, 00:29:24.372 "reconnect_delay_sec": 0, 00:29:24.372 "fast_io_fail_timeout_sec": 0, 00:29:24.372 "psk": "key0", 00:29:24.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.372 "hdgst": false, 00:29:24.372 "ddgst": false 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "bdev_nvme_set_hotplug", 00:29:24.372 "params": { 00:29:24.372 "period_us": 100000, 00:29:24.372 "enable": false 00:29:24.372 } 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "method": "bdev_wait_for_examine" 00:29:24.372 } 00:29:24.372 ] 00:29:24.372 }, 00:29:24.372 { 00:29:24.372 "subsystem": "nbd", 00:29:24.372 "config": [] 00:29:24.372 } 00:29:24.372 ] 00:29:24.372 }' 00:29:24.372 11:21:21 keyring_file -- keyring/file.sh@114 -- # killprocess 2449505 00:29:24.372 11:21:21 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2449505 ']' 00:29:24.372 11:21:21 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2449505 00:29:24.372 11:21:21 keyring_file -- common/autotest_common.sh@952 -- # uname 00:29:24.372 11:21:21 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:24.372 11:21:21 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers 
-o comm= 2449505 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2449505' 00:29:24.645 killing process with pid 2449505 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@966 -- # kill 2449505 00:29:24.645 Received shutdown signal, test time was about 1.000000 seconds 00:29:24.645 00:29:24.645 Latency(us) 00:29:24.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.645 =================================================================================================================== 00:29:24.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@971 -- # wait 2449505 00:29:24.645 11:21:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=2451020 00:29:24.645 11:21:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2451020 /var/tmp/bperf.sock 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2451020 ']' 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.645 11:21:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:24.645 11:21:21 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:24.645 11:21:21 keyring_file -- keyring/file.sh@115 -- # echo '{
00:29:24.645 "subsystems": [
00:29:24.645 {
00:29:24.645 "subsystem": "keyring",
00:29:24.645 "config": [
00:29:24.645 {
00:29:24.645 "method": "keyring_file_add_key",
00:29:24.645 "params": {
00:29:24.645 "name": "key0",
00:29:24.645 "path": "/tmp/tmp.UX1MGORdTT"
00:29:24.645 }
00:29:24.645 },
00:29:24.645 {
00:29:24.645 "method": "keyring_file_add_key",
00:29:24.645 "params": {
00:29:24.645 "name": "key1",
00:29:24.645 "path": "/tmp/tmp.Py928jQ5kh"
00:29:24.645 }
00:29:24.645 }
00:29:24.645 ]
00:29:24.645 },
00:29:24.645 {
00:29:24.645 "subsystem": "iobuf",
00:29:24.645 "config": [
00:29:24.645 {
00:29:24.645 "method": "iobuf_set_options",
00:29:24.645 "params": {
00:29:24.645 "small_pool_count": 8192,
00:29:24.645 "large_pool_count": 1024,
00:29:24.645 "small_bufsize": 8192,
00:29:24.645 "large_bufsize": 135168
00:29:24.645 }
00:29:24.645 }
00:29:24.645 ]
00:29:24.645 },
00:29:24.645 {
00:29:24.645 "subsystem": "sock",
00:29:24.645 "config": [
00:29:24.645 {
00:29:24.645 "method": "sock_impl_set_options",
00:29:24.645 "params": {
00:29:24.645 "impl_name": "posix",
00:29:24.645 "recv_buf_size": 2097152,
00:29:24.645 "send_buf_size": 2097152,
00:29:24.645 "enable_recv_pipe": true,
00:29:24.645 "enable_quickack": false,
00:29:24.645 "enable_placement_id": 0,
00:29:24.645 "enable_zerocopy_send_server": true,
00:29:24.645 "enable_zerocopy_send_client": false,
00:29:24.645 "zerocopy_threshold": 0,
00:29:24.645 "tls_version": 0,
00:29:24.645 "enable_ktls": false
00:29:24.645 }
00:29:24.645 },
00:29:24.645 {
00:29:24.645 "method": "sock_impl_set_options",
00:29:24.645 "params": {
00:29:24.645 "impl_name": "ssl",
00:29:24.645 "recv_buf_size": 4096,
00:29:24.645 "send_buf_size": 4096,
00:29:24.645 "enable_recv_pipe": true,
00:29:24.645 "enable_quickack": false,
00:29:24.645 "enable_placement_id": 0,
00:29:24.645 "enable_zerocopy_send_server": true,
00:29:24.645 "enable_zerocopy_send_client": false,
00:29:24.645 "zerocopy_threshold": 0,
00:29:24.645 "tls_version": 0,
00:29:24.645 "enable_ktls": false
00:29:24.645 }
00:29:24.645 }
00:29:24.645 ]
00:29:24.645 },
00:29:24.645 {
00:29:24.645 "subsystem": "vmd",
00:29:24.645 "config": []
00:29:24.645 },
00:29:24.645 {
00:29:24.645 "subsystem": "accel",
00:29:24.645 "config": [
00:29:24.646 {
00:29:24.646 "method": "accel_set_options",
00:29:24.646 "params": {
00:29:24.646 "small_cache_size": 128,
00:29:24.646 "large_cache_size": 16,
00:29:24.646 "task_count": 2048,
00:29:24.646 "sequence_count": 2048,
00:29:24.646 "buf_count": 2048
00:29:24.646 }
00:29:24.646 }
00:29:24.646 ]
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "subsystem": "bdev",
00:29:24.646 "config": [
00:29:24.646 {
00:29:24.646 "method": "bdev_set_options",
00:29:24.646 "params": {
00:29:24.646 "bdev_io_pool_size": 65535,
00:29:24.646 "bdev_io_cache_size": 256,
00:29:24.646 "bdev_auto_examine": true,
00:29:24.646 "iobuf_small_cache_size": 128,
00:29:24.646 "iobuf_large_cache_size": 16
00:29:24.646 }
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "method": "bdev_raid_set_options",
00:29:24.646 "params": {
00:29:24.646 "process_window_size_kb": 1024
00:29:24.646 }
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "method": "bdev_iscsi_set_options",
00:29:24.646 "params": {
00:29:24.646 "timeout_sec": 30
00:29:24.646 }
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "method": "bdev_nvme_set_options",
00:29:24.646 "params": {
00:29:24.646 "action_on_timeout": "none",
00:29:24.646 "timeout_us": 0,
00:29:24.646 "timeout_admin_us": 0,
00:29:24.646 "keep_alive_timeout_ms": 10000,
00:29:24.646 "arbitration_burst": 0,
00:29:24.646 "low_priority_weight": 0,
00:29:24.646 "medium_priority_weight": 0,
00:29:24.646 "high_priority_weight": 0,
00:29:24.646 "nvme_adminq_poll_period_us": 10000,
00:29:24.646 "nvme_ioq_poll_period_us": 0,
00:29:24.646 "io_queue_requests": 512,
00:29:24.646 "delay_cmd_submit": true,
00:29:24.646 "transport_retry_count": 4,
00:29:24.646 "bdev_retry_count": 3,
00:29:24.646 "transport_ack_timeout": 0,
00:29:24.646 "ctrlr_loss_timeout_sec": 0,
00:29:24.646 "reconnect_delay_sec": 0,
00:29:24.646 "fast_io_fail_timeout_sec": 0,
00:29:24.646 "disable_auto_failback": false,
00:29:24.646 "generate_uuids": false,
00:29:24.646 "transport_tos": 0,
00:29:24.646 "nvme_error_stat": false,
00:29:24.646 "rdma_srq_size": 0,
00:29:24.646 "io_path_stat": false,
00:29:24.646 "allow_accel_sequence": false,
00:29:24.646 "rdma_max_cq_size": 0,
00:29:24.646 "rdma_cm_event_timeout_ms": 0,
00:29:24.646 "dhchap_digests": [
00:29:24.646 "sha256",
00:29:24.646 "sha384",
00:29:24.646 "sha512"
00:29:24.646 ],
00:29:24.646 "dhchap_dhgroups": [
00:29:24.646 "null",
00:29:24.646 "ffdhe2048",
00:29:24.646 "ffdhe3072",
00:29:24.646 "ffdhe4096",
00:29:24.646 "ffdhe6144",
00:29:24.646 "ffdhe8192"
00:29:24.646 ]
00:29:24.646 }
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "method": "bdev_nvme_attach_controller",
00:29:24.646 "params": {
00:29:24.646 "name": "nvme0",
00:29:24.646 "trtype": "TCP",
00:29:24.646 "adrfam": "IPv4",
00:29:24.646 "traddr": "127.0.0.1",
00:29:24.646 "trsvcid": "4420",
00:29:24.646 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:24.646 "prchk_reftag": false,
00:29:24.646 "prchk_guard": false,
00:29:24.646 "ctrlr_loss_timeout_sec": 0,
00:29:24.646 "reconnect_delay_sec": 0,
00:29:24.646 "fast_io_fail_timeout_sec": 0,
00:29:24.646 "psk": "key0",
00:29:24.646 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:24.646 "hdgst": false,
00:29:24.646 "ddgst": false
00:29:24.646 }
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "method": "bdev_nvme_set_hotplug",
00:29:24.646 "params": {
00:29:24.646 "period_us": 100000,
00:29:24.646 "enable": false
00:29:24.646 }
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "method": "bdev_wait_for_examine"
00:29:24.646 }
00:29:24.646 ]
00:29:24.646 },
00:29:24.646 {
00:29:24.646 "subsystem": "nbd",
00:29:24.646 "config": []
00:29:24.646 }
00:29:24.646 ]
00:29:24.646 }'
00:29:24.646 11:21:21 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable
00:29:24.646 11:21:21 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:24.646 [2024-05-15 11:21:21.891135] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization...
[2024-05-15 11:21:21.891189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451020 ]
00:29:24.917 EAL: No free 2048 kB hugepages reported on node 1
00:29:24.917 [2024-05-15 11:21:21.943468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.917 [2024-05-15 11:21:22.013918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:24.917 [2024-05-15 11:21:22.165147] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:29:25.479 11:21:22 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:29:25.479 11:21:22 keyring_file -- common/autotest_common.sh@861 -- # return 0
00:29:25.479 11:21:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys
00:29:25.480 11:21:22 keyring_file -- keyring/file.sh@120 -- # jq length
00:29:25.480 11:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.737 11:21:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 ))
00:29:25.737 11:21:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0
00:29:25.737 11:21:22 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:29:25.737 11:21:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:25.737 11:21:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:25.737 11:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.737 11:21:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:25.997 11:21:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:29:25.997 11:21:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1
00:29:25.997 11:21:23 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:29:25.997 11:21:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:25.997 11:21:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:25.997 11:21:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:29:25.997 11:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.997 11:21:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 ))
00:29:25.997 11:21:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers
00:29:25.997 11:21:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name'
00:29:25.997 11:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:29:26.254 11:21:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]]
00:29:26.254 11:21:23 keyring_file -- keyring/file.sh@1 -- # cleanup
00:29:26.254 11:21:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UX1MGORdTT /tmp/tmp.Py928jQ5kh
00:29:26.254 11:21:23 keyring_file -- keyring/file.sh@20 -- # killprocess 2451020
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2451020 ']'
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2451020
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@952 -- # uname
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2451020
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2451020'
killing process with pid 2451020
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@966 -- # kill 2451020
Received shutdown signal, test time was about 1.000000 seconds
00:29:26.254
00:29:26.254 Latency(us)
00:29:26.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.254 ===================================================================================================================
00:29:26.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:29:26.254 11:21:23 keyring_file -- common/autotest_common.sh@971 -- # wait 2451020
00:29:26.512 11:21:23 keyring_file -- keyring/file.sh@21 -- # killprocess 2449369
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2449369 ']'
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2449369
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@952 -- # uname
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2449369
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2449369'
killing process with pid 2449369
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@966 -- # kill 2449369
00:29:26.512 [2024-05-15 11:21:23.698771] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:29:26.512 [2024-05-15 11:21:23.698805] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:29:26.512 11:21:23 keyring_file -- common/autotest_common.sh@971 -- # wait 2449369
00:29:27.078
00:29:27.078 real 0m12.042s
00:29:27.078 user 0m28.713s
00:29:27.078 sys 0m2.693s
00:29:27.078 11:21:24 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable
00:29:27.078 11:21:24 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:27.078 ************************************
00:29:27.078 END TEST keyring_file
00:29:27.078 ************************************
00:29:27.078 11:21:24 -- spdk/autotest.sh@292 -- # [[ n == y ]]
00:29:27.078 11:21:24 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:29:27.078 11:21:24 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]]
00:29:27.078 11:21:24 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:29:27.078 11:21:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:29:27.079 11:21:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:29:27.079 11:21:24 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT
00:29:27.079 11:21:24 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup
00:29:27.079 11:21:24 -- common/autotest_common.sh@721 -- # xtrace_disable
00:29:27.079 11:21:24 -- common/autotest_common.sh@10 -- # set +x
00:29:27.079 11:21:24 -- spdk/autotest.sh@379 -- # autotest_cleanup
00:29:27.079 11:21:24 -- common/autotest_common.sh@1389 -- # local autotest_es=0
00:29:27.079 11:21:24 -- common/autotest_common.sh@1390 -- # xtrace_disable
00:29:27.079 11:21:24 -- common/autotest_common.sh@10 -- # set +x
00:29:31.268 INFO: APP EXITING
00:29:31.268 INFO: killing all VMs
00:29:31.268 INFO: killing vhost app
00:29:31.268 INFO: EXIT DONE
00:29:33.795 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:29:33.795 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:29:33.795 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:29:36.322 Cleaning
00:29:36.322 Removing: /var/run/dpdk/spdk0/config
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
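The get_refcnt/get_key helpers exercised in the keyring_file trace above reduce to jq filters over the keyring_get_keys RPC output. A minimal standalone sketch (requires jq; the sample JSON below is an illustrative stand-in for the live response from `rpc.py -s /var/tmp/bperf.sock keyring_get_keys`, and the helper names mirror keyring/common.sh):

```shell
#!/usr/bin/env bash
# Illustrative stand-in for the keyring_get_keys RPC response
keys='[{"name":"key0","path":"/tmp/tmp.UX1MGORdTT","refcnt":2},
       {"name":"key1","path":"/tmp/tmp.Py928jQ5kh","refcnt":1}]'

# get_key: select one key object by name (as in keyring/common.sh@12)
get_key() { echo "$keys" | jq --arg n "$1" '.[] | select(.name == $n)'; }

# get_refcnt: pull its reference count out as a raw number
get_refcnt() { get_key "$1" | jq -r .refcnt; }

echo "$keys" | jq length   # number of registered keys
get_refcnt key0            # refcnt of key0 (held by the TLS connection)
get_refcnt key1            # refcnt of key1 (registered but unused)
```

The test then only asserts on these jq results, e.g. `(( $(get_refcnt key0) == 2 ))`, exactly the `(( 2 == 2 ))` comparisons visible in the trace.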
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:29:36.322 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:29:36.322 Removing: /var/run/dpdk/spdk0/hugepage_info
00:29:36.322 Removing: /var/run/dpdk/spdk1/config
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:29:36.322 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:29:36.322 Removing: /var/run/dpdk/spdk1/hugepage_info
00:29:36.322 Removing: /var/run/dpdk/spdk1/mp_socket
00:29:36.322 Removing: /var/run/dpdk/spdk2/config
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:29:36.322 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:29:36.322 Removing: /var/run/dpdk/spdk2/hugepage_info
00:29:36.322 Removing: /var/run/dpdk/spdk3/config
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:29:36.322 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:29:36.322 Removing: /var/run/dpdk/spdk3/hugepage_info
00:29:36.322 Removing: /var/run/dpdk/spdk4/config
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:29:36.322 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:29:36.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:29:36.580 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:29:36.580 Removing: /var/run/dpdk/spdk4/hugepage_info
00:29:36.580 Removing: /dev/shm/bdev_svc_trace.1
00:29:36.580 Removing: /dev/shm/nvmf_trace.0
00:29:36.580 Removing: /dev/shm/spdk_tgt_trace.pid2066575
00:29:36.580 Removing: /var/run/dpdk/spdk0
00:29:36.580 Removing: /var/run/dpdk/spdk1
00:29:36.580 Removing: /var/run/dpdk/spdk2
00:29:36.580 Removing: /var/run/dpdk/spdk3
00:29:36.580 Removing: /var/run/dpdk/spdk4
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2064312
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2065501
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2066575
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2067213
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2068159
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2068398
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2069375
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2069607
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2069815
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2071458
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2072732
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2073012
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2073305
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2073601
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2073895
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2074150
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2074398
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2074677
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2075413
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2078399
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2078663
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2078929
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2079158
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2079555
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2079732
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2080283
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2080509
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2080986
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2081164
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2081441
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2081674
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2082077
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2082330
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2082666
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2082975
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2083062
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2083128
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2083382
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2083639
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2083900
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2084172
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2084452
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2084729
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2085004
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2085305
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2085577
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2085843
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2086097
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2086344
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2086596
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2086850
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2087096
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2087349
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2087600
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2087860
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2088105
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2088357
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2088501
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2088949
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2092598
00:29:36.580 Removing: /var/run/dpdk/spdk_pid2136465
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2140710
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2150716
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2156110
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2160104
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2160785
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2172185
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2172276
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2173016
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2173917
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2174833
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2175327
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2175512
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2175788
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2175826
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2175859
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2177181
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2178112
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2179024
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2179496
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2179519
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2179851
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2181030
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2182173
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2190507
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2190758
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2195007
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2200858
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2203475
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2213869
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2223270
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2225098
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2226022
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2242600
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2246387
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2270478
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2274849
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2276643
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2278485
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2278717
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2278739
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2278973
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2279480
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2281318
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2282323
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2282798
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2285123
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2285642
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2286355
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2290395
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2300339
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2304499
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2310929
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2312384
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2313840
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2318135
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2322265
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2329621
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2329623
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2334232
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2334357
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2334579
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2335038
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2335046
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2339295
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2339859
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2344198
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2346946
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2352345
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2358290
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2366894
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2373932
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2373934
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2392433
00:29:36.839 Removing: /var/run/dpdk/spdk_pid2393021
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2393618
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2394312
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2395282
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2395922
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2396460
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2397152
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2401652
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2401886
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2408458
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2408735
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2410962
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2418841
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2418858
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2423887
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2425853
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2427815
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2428866
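The killprocess helper exercised earlier in the keyring_file trace follows a fixed pattern: probe the PID with `kill -0`, read its command name with `ps --no-headers -o comm=`, refuse to touch sudo, then kill and wait. A standalone sketch of that pattern (the background `sleep` below is an illustrative stand-in for the bdevperf/target process; this is not the literal autotest_common.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace above.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1          # still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1
    [ "$process_name" = sudo ] && return 1          # never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
    return 0
}

sleep 60 &        # stand-in for the process under test
killprocess $!
```

The guard against `sudo` matters on shared CI hosts: the PID recorded at start-up may have been recycled, and blindly killing it could take down an unrelated privileged process.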
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2430977
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2432114
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2440839
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2441305
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2441863
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2444020
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2444502
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2445067
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2449369
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2449505
00:29:37.098 Removing: /var/run/dpdk/spdk_pid2451020
00:29:37.098 Clean
00:29:37.098 11:21:34 -- common/autotest_common.sh@1448 -- # return 0
00:29:37.098 11:21:34 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:29:37.098 11:21:34 -- common/autotest_common.sh@727 -- # xtrace_disable
00:29:37.098 11:21:34 -- common/autotest_common.sh@10 -- # set +x
00:29:37.098 11:21:34 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:29:37.098 11:21:34 -- common/autotest_common.sh@727 -- # xtrace_disable
00:29:37.098 11:21:34 -- common/autotest_common.sh@10 -- # set +x
00:29:37.098 11:21:34 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:37.098 11:21:34 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:37.098 11:21:34 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:37.098 11:21:34 -- spdk/autotest.sh@387 -- # hash lcov
00:29:37.098 11:21:34 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:37.098 11:21:34 -- spdk/autotest.sh@389 -- # hostname
00:29:37.098 11:21:34 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:37.356 geninfo: WARNING: invalid characters removed from testname!
00:29:59.301 11:21:54 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:59.560 11:21:56 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:01.465 11:21:58 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:03.368 11:22:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:04.790 11:22:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:06.690 11:22:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:08.592 11:22:05 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:08.592 11:22:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:08.592 11:22:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:08.592 11:22:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:08.592 11:22:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:08.592 11:22:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:08.592 11:22:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:08.592 11:22:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:08.592 11:22:05 -- paths/export.sh@5 -- $ export PATH
00:30:08.592 11:22:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:08.592 11:22:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:08.592 11:22:05 -- common/autobuild_common.sh@437 -- $ date +%s
00:30:08.592 11:22:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715764925.XXXXXX
00:30:08.592 11:22:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715764925.a8UwN9
00:30:08.592 11:22:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:30:08.592 11:22:05 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:30:08.592 11:22:05 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:08.592 11:22:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:08.592 11:22:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:08.592 11:22:05 -- common/autobuild_common.sh@453 -- $ get_config_params
00:30:08.592 11:22:05 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:08.592 11:22:05 -- common/autotest_common.sh@10 -- $ set +x
00:30:08.592 11:22:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:08.592 11:22:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:30:08.592 11:22:05 -- pm/common@17 -- $ local monitor
00:30:08.592 11:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:08.592 11:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:08.592 11:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:08.592 11:22:05 -- pm/common@21 -- $ date +%s
00:30:08.592 11:22:05 -- pm/common@21 -- $ date +%s
00:30:08.592 11:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:08.592 11:22:05 -- pm/common@25 -- $ sleep 1
00:30:08.592 11:22:05 -- pm/common@21 -- $ date +%s
00:30:08.592 11:22:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764925
00:30:08.592 11:22:05 -- pm/common@21 -- $ date +%s
00:30:08.592 11:22:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764925
00:30:08.592 11:22:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764925
00:30:08.592 11:22:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764925
00:30:08.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764925_collect-vmstat.pm.log
00:30:08.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764925_collect-cpu-load.pm.log
00:30:08.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764925_collect-cpu-temp.pm.log
00:30:08.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764925_collect-bmc-pm.bmc.pm.log
00:30:09.526 11:22:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:30:09.526 11:22:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:09.526 11:22:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:09.526 11:22:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:09.526 11:22:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:09.526 11:22:06 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:09.526 11:22:06 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:09.526 11:22:06 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:09.526 11:22:06 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:09.526 11:22:06 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:09.526 11:22:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:09.526 11:22:06 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:09.526 11:22:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:09.526 11:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:09.526 11:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:09.526 11:22:06 -- pm/common@44 -- $ pid=2460716
00:30:09.526 11:22:06 -- pm/common@50 -- $ kill -TERM 2460716
00:30:09.526 11:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:09.526 11:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:09.526 11:22:06 -- pm/common@44 -- $ pid=2460718
00:30:09.526 11:22:06 -- pm/common@50 -- $ kill -TERM 2460718
00:30:09.526 11:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:09.526 11:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:09.526 11:22:06 -- pm/common@44 -- $ pid=2460720
00:30:09.526 11:22:06 -- pm/common@50 -- $ kill -TERM 2460720
00:30:09.526 11:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:09.526 11:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:09.526 11:22:06 -- pm/common@44 -- $ pid=2460747
00:30:09.526 11:22:06 -- pm/common@50 -- $ sudo -E kill -TERM 2460747
00:30:09.526 + [[ -n 1961552 ]]
00:30:09.526 + sudo kill 1961552
00:30:09.535 [Pipeline] }
00:30:09.553 [Pipeline] // stage
00:30:09.558 [Pipeline]
} 00:30:09.575 [Pipeline] // timeout 00:30:09.580 [Pipeline] } 00:30:09.595 [Pipeline] // catchError 00:30:09.600 [Pipeline] } 00:30:09.618 [Pipeline] // wrap 00:30:09.623 [Pipeline] } 00:30:09.639 [Pipeline] // catchError 00:30:09.647 [Pipeline] stage 00:30:09.649 [Pipeline] { (Epilogue) 00:30:09.664 [Pipeline] catchError 00:30:09.665 [Pipeline] { 00:30:09.680 [Pipeline] echo 00:30:09.682 Cleanup processes 00:30:09.687 [Pipeline] sh 00:30:09.974 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.974 2460830 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:09.974 2461114 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.987 [Pipeline] sh 00:30:10.265 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.265 ++ grep -v 'sudo pgrep' 00:30:10.265 ++ awk '{print $1}' 00:30:10.265 + sudo kill -9 2460830 00:30:10.277 [Pipeline] sh 00:30:10.560 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:20.536 [Pipeline] sh 00:30:20.823 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:20.823 Artifacts sizes are good 00:30:20.837 [Pipeline] archiveArtifacts 00:30:20.844 Archiving artifacts 00:30:21.016 [Pipeline] sh 00:30:21.301 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:21.317 [Pipeline] cleanWs 00:30:21.327 [WS-CLEANUP] Deleting project workspace... 00:30:21.327 [WS-CLEANUP] Deferred wipeout is used... 00:30:21.334 [WS-CLEANUP] done 00:30:21.336 [Pipeline] } 00:30:21.357 [Pipeline] // catchError 00:30:21.369 [Pipeline] sh 00:30:21.650 + logger -p user.info -t JENKINS-CI 00:30:21.659 [Pipeline] } 00:30:21.675 [Pipeline] // stage 00:30:21.680 [Pipeline] } 00:30:21.698 [Pipeline] // node 00:30:21.704 [Pipeline] End of Pipeline 00:30:21.740 Finished: SUCCESS